Did you ever do science fair in school? Creating that yearly experiment taught me the scientific method:
- Create a hypothesis
- Test it in a controlled way
- Gather results
- Draw conclusions that either confirm or invalidate the hypothesis
You’ll hear Lean Startup aficionados talk a lot about validated learning, and a key step in that is first having something to validate: a hypothesis. In the context of product development, a hypothesis is basically that customers will like this service or that feature and will be willing to pay for it. To test the hypothesis, you develop and launch the feature and record whether it succeeds.
Test Why, Not Just Which
In the context of marketing, most people recognize that they should do some kind of testing of search and banner ads and the like. However, their method for doing this is often to throw up two different search ads, promote the winner, and stop the loser: basically, they jump straight to steps 2 and 3 of the scientific method. That’s fine, and better than doing nothing at all, but your testing program will be even better if you add steps 1 and 4 into the mix. Then you’re answering not just “Which ad is better?” but, more importantly, “Why is it better?” You’re testing motivations for behavior, and those insights can enhance all your marketing.
For example, say you want to test the background color of two different banner ads. If you just randomly pick blue and red, and blue wins, you’ve really just learned that blue is better. Instead, if you were testing whether warm or cool colors work best, and blue (as a representative of cool) wins, you can further test different cool colors against each other, or test another warm/cool pair to see if it was the shade or the intensity.
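Whichever pair you test, it helps to check that the winner actually won before you build on the result. Here’s a minimal sketch of that check in Python using a standard two-proportion z-test; the click and impression counts are made-up numbers for illustration:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results for two banner variants (illustrative numbers)
blue_clicks, blue_impressions = 120, 10_000
red_clicks, red_impressions = 90, 10_000

p_blue = blue_clicks / blue_impressions
p_red = red_clicks / red_impressions

# Pooled click-through rate under the null hypothesis (no difference)
p_pool = (blue_clicks + red_clicks) / (blue_impressions + red_impressions)
se = sqrt(p_pool * (1 - p_pool) * (1 / blue_impressions + 1 / red_impressions))

z = (p_blue - p_red) / se
# Two-sided p-value from the standard normal distribution
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, p = {p_value:.3f}")
```

With these sample numbers the difference clears the conventional 0.05 threshold, so you could treat blue (cool) as the winner and move on to the follow-up tests described above; with smaller counts the same gap might just be noise.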
With search ads, you don’t always have ad groups big enough to support a robust testing program: if you have 12 tightly organized ad groups, for example, only 2 or 3 of them might have enough traffic to return results quickly. If you just throw up different ads in those groups, you learn only what works for those groups. But if you test a theory such as “A call to action will increase response,” then once you validate that hypothesis you can add calls to action to the lower-volume groups too. You can also test more things at once: run three different hypotheses in three different ad groups and learn in parallel. You might find that a dynamic headline, a call to action, and at least one exclamation mark each work better in your search ads, then combine all three into one ad to see if that’s better still.
In short, creating a hypothesis before you test is a key part of learning, not just optimizing.
(Photo credit: peaceplusone via Flickr)