What Happened When I Wanted to Use ion for Some Complex Calculations

Interactive calculators with complex logic? Yes, we’ve got that! 

The ion platform has always been a great place for testing different types of interactive experiences against each other in real-time. After setting up a test, the platform does all the hard math to determine a statistically significant test winner without you having to think about it. This could be in the middle of the night when you’re fast asleep and hopefully not counting conversions jumping over a fence.

If you’re like me, when you’re awake, you want to know how your test is performing and how much longer it will be until a winner is declared. In our software, we have a fairly obscure, yet extremely helpful, reporting gauge called “creative confidence interval.” In this gauge, you can see a couple of parabolas that give you an idea of how your test is performing and which interactive experience is trending ahead in that test. Explaining this gauge to a non-statistician is admittedly not a picnic, but once you understand how it works, it definitely helps illuminate the performance of a test as it is under way.

This gauge allows you to see your current conversion rate as well as projections for each creative in the test at 80% and 95% confidence. If you’re looking to optimize your test at 90% or 99% confidence, though, there’s a little guesswork involved in the projections.

When teaching our customers about this gauge, I often get the question, “How do I know when the test will be over?” 

Before we get into how I tried to solve this problem, a little backstory on me, Dave the Product Specialist, might be helpful. I’m a bit of a math nerd and I love challenges. Before a customer visit a few years ago, the customer asked me to spend some time explaining how the ion platform calculates statistical confidence. I took this as a personal challenge: explaining something really complicated as simply as possible.

I wasn’t going to back down from this. Long story short, I bugged everyone here at ion I thought might know how the platform technically calculates confidence beyond the explanation “when this one curve passes this other one, you have a test winner.” In my pestering, I learned that the platform uses the Wald method to project conversion rates moving forward. If you’re not familiar with the Wald method, it’s the fun little formula below, where p is the conversion rate, n is the number of visitors, and z_α/2 is a fixed value that corresponds to a chosen level of confidence. If you want to go that far, you can Google a z-score table to see the values for each level of confidence. Fun stuff!


The Wald method: p ± z_α/2 × √( p(1 − p) / n )
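
If you’d rather see that as code than as Greek letters, here’s a rough Python sketch of the same idea. This is just my back-of-the-napkin version, not how the ion platform itself does it, and the function name and rounded z-scores are mine:

```python
from math import sqrt

# Rounded two-sided z-scores for the confidence levels discussed in this post
Z_SCORES = {80: 1.282, 90: 1.645, 95: 1.960, 99: 2.576}

def wald_interval(rate, visitors, confidence=95):
    """Wald projection for a conversion rate.
    `rate` is the observed conversion rate as a decimal (0.042 = 4.2%).
    Returns the (lower, upper) projected conversion rates."""
    margin = Z_SCORES[confidence] * sqrt(rate * (1 - rate) / visitors)
    return rate - margin, rate + margin
```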

Flash forward to now, when the platform has the ability to do math so our customers can build calculators without coding. Well, I was wearing my math nerd hat one day and started thinking about the creative confidence intervals gauge a bit more, and about why we haven’t ever really been able to put together an easy way for users to see 90% and 99% confidence projections or to see roughly how much more traffic would be needed before a test winner could be declared.

After bugging some of the other nerds here at ion, I discovered that the Sqrt function can be used to perform square roots. I decided to put together a quick interactive experience with a form that includes two text boxes (one for total visitors and another for the conversion rate) as well as radio buttons to select the level of confidence. You can put your numbers in and it will calculate the lower- and upper-bound projections at 80%, 90%, 95% or 99% confidence in simple numbers that are easy to read.
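
To give you a feel for the numbers, here’s what that calculation looks like using the wald_interval sketch from above (the inputs are made up, not from a real test):

```python
# 10,000 total visitors at a 4.2% conversion rate, projected at 95% confidence
low, high = wald_interval(0.042, 10_000, confidence=95)
print(f"{low:.2%} to {high:.2%}")  # roughly 3.81% to 4.59%
```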

Wow. I knew the calculations I set up (without code) in the ion platform were interesting, but they wouldn’t be practical or helpful unless two experiences could be compared against each other. So I made some modifications: I added another form with a couple more text boxes, created new data fields to keep everything separate and, eventually, I had something that would give me two different sets of numbers (one for each creative) to compare. When the lower-bound number of one is higher than the upper-bound number of the other, we have a test winner.
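
In code, that winner check is nothing fancy. Here’s a sketch that builds on the wald_interval function from earlier; the function name and the sample numbers are mine, not the platform’s:

```python
def declare_winner(rate_a, visitors_a, rate_b, visitors_b, confidence=95):
    """Return 'A' or 'B' once one creative's lower bound clears the other's
    upper bound at the chosen confidence level, or None if it's too close to call."""
    low_a, high_a = wald_interval(rate_a, visitors_a, confidence)
    low_b, high_b = wald_interval(rate_b, visitors_b, confidence)
    if low_a > high_b:
        return "A"
    if low_b > high_a:
        return "B"
    return None

# Creative A: 5,200 visitors at 4.8%; creative B: 5,100 visitors at 3.9%
print(declare_winner(0.048, 5_200, 0.039, 5_100))  # None -- too close to call at 95%
```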

This was great for easily seeing the numbers at any of our available levels of statistical confidence in the platform, but I also felt like something was missing. I kept wondering whether there was a way to know how much longer the test would need to run before a test winner would be declared. For this, I had to reduce the number of variables involved. While I know conversion rates can fluctuate, this would have to operate under the assumption that conversion rates for each creative would remain static (rarely the case in a real test, but still a necessary assumption for the calculations). I also came to the conclusion that I couldn’t really figure this out using time (i.e., how many more days until the test ends), since there is no time variable in the Wald method, and the thought of more Googling to figure out how to incorporate time into all of this was almost too overwhelming.

I started to think that if I could just come up with a way to compare the formula for one creative to the formula for the other and solve for n, I could figure something out that I could also feed into ion’s advanced rules to have it calculate how much traffic is needed to find a test winner. I started crunching numbers (or letters, really), throwing away too much paper and finally coming to the conclusion that my ability to do complex math is not where it was in high school. (On a side note, if you’re trying this at home, you’ll want to keep in mind that the formulas on either side of the greater-than symbol can’t be combined, since they should never have the same number of visitors or conversion rate.)

Then a light bulb went off and I realized that all I really had to do was add something that would let someone put more traffic into the formula and see how the numbers change. That would let me go back and forth until I see where one creative will outperform the other and get an idea of how much more traffic would be needed to reach a test winner.

So I added a text box below where the projections are shown on the second page to add more traffic, and sent that form to a third page where the new calculations would be shown. In a perfect world, the platform would just tell me how much more traffic is needed, but that would probably require me to pull out my AP Calc book from high school (which I definitely don’t have anymore) and try to relearn this stuff. Ain’t nobody got time for that! This page is just meant to be something quick that I can play around with to find projections at any level of statistical significance available in the platform and to see when a test winner would be declared (assuming the conversion rates stay the same).
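
If you want to skip the back-and-forth, here’s roughly what that guess-and-check looks like as a loop, building on the earlier sketches. Splitting the extra traffic evenly between the creatives is my assumption, and the step size is arbitrary; this is just my way of playing with the numbers, not something the platform calculates for you:

```python
def traffic_until_winner(rate_a, visitors_a, rate_b, visitors_b,
                         confidence=95, step=500, max_extra=1_000_000):
    """Hold both conversion rates fixed, add traffic in steps split evenly
    between the two creatives, and report how much extra traffic it takes
    before one creative's lower bound clears the other's upper bound."""
    extra = 0
    while extra <= max_extra:
        winner = declare_winner(rate_a, visitors_a + extra // 2,
                                rate_b, visitors_b + extra // 2, confidence)
        if winner:
            return extra, winner
        extra += step
    return None, None  # no winner within the traffic budget

# Same made-up numbers as before: about 5,500 more visitors until A wins at 95%
print(traffic_until_winner(0.048, 5_200, 0.039, 5_100, confidence=95))
```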

[Screenshot: creative confidence level calculator]

If you’re interested in checking it out, you can find my ugly, but functional, page (I’m not a designer and I spent all the time working on the calculations, not on creating the page) here.


P.S. We never discussed statistical confidence in that customer meeting. When we shook hands and exchanged business cards at the end, I asked them if they had any questions on statistical confidence and told them I came prepared to do math. Turns out I wasn’t in Good Will Hunting; no one really cared to see me do math, and they just took my word for it that I could have done it.