Once you’ve got a hypothesis in hand, it’s time to get testing.
Split testing, aka A/B testing, means running two options against each other at the same time, with the same audience randomly split between them. This is pretty easy to do in a lot of digital marketing:
- The major paid search providers allow for different ads to be rotated evenly against your keywords
- Many banner ad networks also provide for testing, although direct banner buys may not
- If your email marketing provider doesn’t give an automatic way to test emails, you can pretty easily split your list yourself and send out two different messages
- Services like Optimizely or Unbounce make landing page testing easy
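If you do end up splitting an email list yourself, the key is that the split is random, not alphabetical or first-half/second-half. A minimal sketch in Python (the `split_list` helper and the fixed seed are my own illustrative choices, not any provider's API):

```python
import random

def split_list(recipients, seed=42):
    """Shuffle an email list randomly, then cut it into two halves, A and B.
    The seed is fixed only so the split is reproducible for this example."""
    shuffled = list(recipients)
    random.Random(seed).shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Hypothetical list of 1,000 addresses.
emails = [f"user{i}@example.com" for i in range(1000)]
group_a, group_b = split_list(emails)
print(len(group_a), len(group_b))  # two equal halves, no overlap
```

Shuffling before cutting is what makes the two groups comparable: any pattern in how the list was built (signup date, source, alphabet) gets spread evenly across both halves.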
Split testing is a great way to feel confident that the result you observe comes from the thing you are testing and not some other random factor. For example, if you ran one search ad for two weeks with a conversion rate of 3%, then another for two weeks with a conversion rate of 5%, could you really be sure the second ad won because it was better? Maybe its run was two weeks closer to Christmas. Or maybe you had a site outage during the first two weeks, or some good PR during the second two.
But if you test the two ads against each other in a split test, alternating them evenly across a randomized population, those other factors wash out, and you can feel comfortable saying the second ad was simply better.
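For readers who want to put a number on "comfortable": a standard two-proportion z-test estimates how likely a gap like 3% vs. 5% is to be pure chance. This is a generic statistical sketch, not tied to any ad platform, and the 1,000-impressions-per-ad sample sizes are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the difference between two conversion
    rates bigger than random chance alone would plausibly produce?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # combined rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# The 3% vs 5% example above, assuming 1,000 impressions per ad:
# 30 conversions for ad A, 50 for ad B.
z, p = two_proportion_z(30, 1000, 50, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With those made-up sample sizes, the p-value comes in under the conventional 0.05 threshold, so the gap would be unlikely to be luck. The same 3%-vs-5% gap on only 100 impressions per ad would not be, which is why sample size matters as much as the gap itself.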
Keep It Under Control
So setting up split tests helps you minimize extraneous factors; you can minimize them even further by paying careful attention to how you set up your tests. Make sure you limit the differences between your two marketing assets to just the thing you want to test.
Say you want to test whether having a call to action in your search ad improves response. What would be wrong with these two ads as your test subjects?
The problem is, although one ad has a call to action (“Try It Now!”) and one ad doesn’t, there are other differences between them as well: the headlines are different, the key messages are different, and one is longer than the other. If what you really want to learn is how effective a call to action is, you’re not going to get at it this way. You might see a difference in performance, but you won’t be able to confidently attribute it to the factor you’re looking at.
And then you’ve gone and wasted time and effort by setting up a test that didn’t teach you anything. Those happy scientists in the last slide? That makes them cry. EVERY TIME.
Don’t make the scientists cry.
Instead, make your ads largely the same except for what you want to test:
The first ad wins? We say, “Yay! Calls to action work! I will try this in other ads!”
This doesn’t mean, though, that all split tests must involve tiny little tweaks; it depends on what you’re testing. If you’re designing banner ads and you really don’t have a sense of whether a playful overall effect works better than a serious one, you’ll be putting out two pretty different ads. Once that larger question is answered, you might move on to smaller variations of the winning concept, until eventually you’re changing background colors or testing different pictures. And then you might want to test the whole concept again against another concept.
The key is to control non-test factors as much as possible, so that you feel confident that you are learning something real.
Next up: what to do when split tests aren’t possible.
(Photo credit: shar ka via Flickr)