Significance

[Photo: two coconuts]

This is what came up when I searched for “compare.” This and lots of pictures of a meerkat which is apparently named Compare.

So you’re running a marketing campaign, because you are awesome and know that testing is your path to improved performance and general hosannas.  You’ve got a couple of different banner ads with conversion rates of 4% and 5%.  (Hahaha, I know no banner ad ever in the history of the Internet has had that kind of conversion rate, but stay with me for a minute.)  Time to declare Mr. 5% the winner and move on, right?

Not necessarily.

You’ve probably noticed that your conversion rates don’t stay very stable– one week they’re down, one week they’re up, even if you haven’t done a thing to your campaign.  So how do you know that the one that looks to be the winner today won’t take a nosedive tomorrow?

By testing the significance of the difference between the two ads.

Results are considered statistically significant if it is unlikely that they occurred by chance.  Statistical significance is also sometimes referred to as a confidence level– the higher the significance, the more confident you are that differences between two results aren’t due to random chance.  A confidence level of 95% is often considered the threshold for declaring a winner, but you may choose to accept a lower bar if you’re trying to move through testing options quickly.

If you’re hungry for the stats– and who isn’t?– you can take a look here to see how to compare two different proportions (which is what a conversion rate is) and check whether the difference between them is significant.
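If you’d rather see the mechanics in code, here’s a minimal sketch of the standard two-proportion z-test in Python.  It’s my own illustration– the function name and the choice of a two-tailed test are mine, not necessarily what any particular online tool does:

```python
from math import sqrt, erf

def significance(pop_a, success_a, pop_b, success_b):
    """Confidence that the difference between two rates isn't random chance."""
    rate_a = success_a / pop_a        # e.g. conversions / impressions for ad A
    rate_b = success_b / pop_b
    # Pool the two samples, as if both ads performed identically (the null hypothesis)
    pooled = (success_a + success_b) / (pop_a + pop_b)
    se = sqrt(pooled * (1 - pooled) * (1 / pop_a + 1 / pop_b))
    z = (rate_a - rate_b) / se        # how many standard errors apart the rates are
    # Standard normal CDF, built from the error function in the math module
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    # Two-tailed confidence level, e.g. 0.95 means 95% confident
    return 1 - 2 * (1 - phi(abs(z)))
```

On the 4% vs. 5% example above: with 10,000 impressions per ad, significance(10000, 400, 10000, 500) comes out around 99.9%– a clear winner.  With only 1,000 impressions each it drops to roughly 72%, nowhere near the 95% bar.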

If you’d just like to skip to the part where you check your results, there are a couple of online tools you can use (here or here).

But if you’re like me you’ll quickly find that using the online tool gets tedious.  That’s why I created a spreadsheet that lets you input the impressions and conversions of the winning and losing ads and from that calculates the degree of confidence in the result.  You can find it here:

Split Testing Results Calculator

Having it in spreadsheet form makes it easier to use it for your own glorious purposes– for example, I created a different version that lets me paste in downloaded AdWords results and mark the winner and loser, and it automatically picks out the right stats and throws up my results.  Magic.

Quick note on the inputs

I mostly use this for PPC ad tests, although you can use it for banners, emails, and any old thing with a response rate.  You need two stats:

  • Population stat: this is going to be something like impressions, opened emails, etc. Basically, it’s how many people saw your thing.
  • Success stat: this is the thing you wanted to happen. God willing you’ll make it a conversion event and not clicks.

For PPC ads some people just use conversion rate, meaning conversions over clicks.  However, there could easily be a situation where an ad converted better per click but had a lower click-through rate, so that you end up converting proportionally fewer of the people who originally saw the ad.  Therefore I prefer to take it all the way from impressions.  Which of course means always having to calculate test results by hand, because Google doesn’t even offer conversions over impressions as a stat.
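To make that concrete, here’s a made-up example (all numbers invented) run through the significance sketch from earlier:

```python
# Two hypothetical ads, 10,000 impressions each:
#   Ad A: 200 clicks (2% CTR), 20 conversions -> 10% per click, 0.20% per impression
#   Ad B: 500 clicks (5% CTR), 35 conversions ->  7% per click, 0.35% per impression

print(20 / 200, 35 / 500)   # 0.10 vs 0.07: judged on clicks, Ad A "wins"

# Taken from impressions, Ad B converts more of the people who saw it,
# and the difference is right around the 95% confidence bar:
print(significance(10000, 20, 10000, 35))   # ~0.957
```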

Now go be all scientific.

Photo by thienzieyung via Flickr.