Chapter 12: Testing Hypotheses
“Okay,” Chuck said. “That rings a bell. How do we get started?”
“Well, the order is pretty clear. It’s set up that way on purpose,” Carole said.
Chuck gave her a dirty look, but nodded.
“The good news is that a lot of the steps are the same for any profit engineering effort,” I said. “The trickiest part is the hypothesis-testing phase.”
“That’s the actual experiment, right?” Carole said.
“And what’s tricky about it?”
“What’s not tricky is the infrastructure. There are a lot of tools out there that can give you all kinds of numbers about any variable you want to measure.”
“So, what’s tricky then?”
“Making sure you set up the experiment so you measure what you think you’re measuring.”
“Ever hear that old joke about how a scientist looked at traffic patterns and decided that people could get to and from work faster if we changed the work day to noon to eight?”
“No. How’s it go?”
“He’d noticed that traffic got bad at 9 and at 5, so he figured people should commute at other times.”
“Oooooooh,” said Carole.
“Gotcha,” said Chuck.
“So, here’s how we avoid making that kind of mistake.”
Of the stages in the scientific method, experimentation is the most complex, intimidating, and easy to mess up. It’s also where the most important parts of science happen. When you read articles with contradictory scientific findings, you’re usually looking at at least one poorly conducted experiment.
Every time somebody makes a disastrously wrong call about marketing and conversion, you’re either looking at a poorly conducted experiment or (much more often) no experiment at all.
So, here’s how to conduct your conversion experiments correctly.
Elementary, My Dear Watson
All experiments consist of a handful of carefully controlled elements. If your exploration into your conversion methods includes careful construction of each of these, it should yield good results. If an element is missing in your consideration, or not tightly controlled, your conclusions will be questionable. And decisions based on questionable conclusions often yield poor results.
An experiment is only as good as its hypothesis: a statement or question that defines the purpose of the experiment. Operating without one is like going on a cruise without a destination. Even with excellent maps and a high-quality boat, you’ll get somewhere…but you might not necessarily end up where you want to go.
The good news about hypotheses for profit engineering is that you’ll be able to use the same ones over and over again. You’ll just slot in different variables and subjects.
For example, in an A/B test, your hypothesis could be “Which of these options generates the most leads over one week?” One time “these options” might be two different Facebook ads. The next, “these options” might be two banner ads. A third time, it might be a pair of titles for your value-added ebook.
Variables are the factors in an experiment that you change and compare to one another — the “moving parts” of the machine. It’s important to limit the number of active variables in any experiment; otherwise, you might end up measuring something different from what you’re trying to figure out.
When discussing variables, there are two kinds you need to pay close attention to.
- Control variable. This is a baseline you measure without making any changes, so you can be sure that whatever you change in your experiment has a meaningful effect. In medical experiments, the control is often a placebo: test subjects are given a sugar pill and told it’s medicine, to establish what happens when no real treatment is applied. That baseline is then compared to the impact of the actual treatment, to make sure the treatment is worth giving.
- Confounding variables. These are things that impact a test subject unintentionally, in ways that might confuse you when you analyze your data. For example, because of what time rush hour happens in most cities, an uninformed researcher might conclude that dusk and dawn cause heavy traffic. The fact that rush hour often happens at sunrise and sunset is a confounding variable.
It’s your job to construct your profit engineering experiment to include one (and only one) control variable, and to avoid or understand confounding variables as much as possible.
With an A/B test, your control variable is your baseline number of website hits or a similar performance metric. If those numbers don’t change while you run your tests, you can conclude that neither of your marketing options is performing well enough to use.
The calendar is an example of a confounding variable for A/B testing. If you run a sports betting website, and run test A the week before the Super Bowl and test B the week after…test A is going to overwhelmingly outperform test B. But that won’t be because test A was the better piece of marketing.
Metrics are the numbers you compare when looking at the results of an experiment. It’s possible to carry out an experiment and get meaningful results without specific, empirical metrics, but it’s really hard.
The good news is that modern marketing software (more on that in a bit) automatically generates tons of metrics for you to analyze. A few of the most common and important are:
- The number of impressions a given tactic generates
- The conversion rate of a given tactic
- The ROI for a given marketing strategy
- Your p-value: roughly, the probability that a difference as large as the one you’re seeing could have shown up by chance alone. The lower it is, the more confident you can be that you’re looking at a real effect rather than noise.
In A/B testing, your metric will be some measure of how well people respond to each of the two messages you’re testing against one another. If you’re comparing two emails, your metric would be the number of click-throughs generated by each.
One final note on metrics: pay attention to statistical significance. It’s impossible to cut every possible confounding variable out of your experiment, because you’re conducting it in the real world. If the difference between your two results is small enough that chance alone could easily explain it (the conventional cutoff is a p-value above 0.05), your two tests probably aren’t significantly different, or you’re comparing them using the wrong metric.
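To make the significance check concrete, here’s a minimal sketch of one common way to do it: a two-proportion z-test comparing the click-through rates of two emails. The function name and the campaign numbers are hypothetical, and this is just one of several reasonable tests; most A/B testing tools run something like it for you automatically.

```python
import math

def two_proportion_p_value(clicks_a, sends_a, clicks_b, sends_b):
    """Two-sided p-value for the difference between two click-through
    rates, using a pooled two-proportion z-test."""
    rate_a = clicks_a / sends_a
    rate_b = clicks_b / sends_b
    # Pool the two samples to estimate the overall click-through rate.
    pooled = (clicks_a + clicks_b) / (sends_a + sends_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (rate_a - rate_b) / std_err
    # Convert the z-score to a two-sided p-value via the normal distribution.
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical results: email A got 120 clicks on 2,000 sends,
# email B got 90 clicks on 2,000 sends.
p = two_proportion_p_value(120, 2000, 90, 2000)
print(f"p-value: {p:.3f}")
```

With these made-up numbers the p-value comes out below 0.05, so you could treat email A’s edge as more than chance; with identical rates it would come out at 1.0, meaning no evidence of any difference.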
Why Test Hypotheses?
Setting up, conducting, measuring, and analyzing profit engineering experiments is hard. So why bother?
You bother because this kind of tight observation, measurement, and analysis allows you to make the small course corrections that add up to getting where you want to be as quickly as you can.
Keep in mind that profit growth happens at several different points, and a gain at each point can mean huge gains in your overall income. Testing hypotheses helps you determine the best places to aim for those gains.
For example, say you have a 1% response rate on a given ad campaign and a 10% conversion rate on those responses. Adding one percentage point to that conversion rate (10% to 11%) improves your sales by 10%.
But adding one percentage point to that response rate (1% to 2%) doubles the number of responses, increasing your sales by 100%.
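The arithmetic behind that comparison can be sketched in a few lines. The impression count here is made up purely for illustration; the response and conversion rates are the ones from the example above.

```python
# Hypothetical campaign: 100,000 ad impressions.
impressions = 100_000
response_rate = 0.01    # 1% of viewers respond
conversion_rate = 0.10  # 10% of responders buy

# Baseline: impressions * response rate * conversion rate.
baseline_sales = impressions * response_rate * conversion_rate

# One percentage point added to the conversion rate (10% -> 11%):
sales_better_conversion = impressions * response_rate * 0.11

# One percentage point added to the response rate (1% -> 2%):
sales_better_response = impressions * 0.02 * conversion_rate

print(baseline_sales, sales_better_conversion, sales_better_response)
```

The baseline yields 100 sales; the conversion-rate gain lifts that to 110 (a 10% improvement), while the same one-point gain in response rate lifts it to 200 (a 100% improvement).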
Testing hypotheses helps you see those opportunities and act on them.
Modern technology makes this possible today in ways that were never before imaginable. And if it’s possible to grow your business reliably using careful experimentation coupled with unbiased analysis, then it’s your responsibility to do exactly that.
This is where the engineering and scientific approach really begins to diverge — at least in language — from the traditional way of marketing for businesses. But now you understand how the scientific method guides powerful profit engineering. You also understand which steps in the scientific method are most effective in analyzing your marketing efforts. You understand what you must change, measure, and analyze to put the right changes in the right places and maximize your marketing return.
Ryan Flannagan is the Founder & CEO of Nuanced Media, an international eCommerce marketing agency specializing in Amazon. Nuanced has sold hundreds of millions of dollars online, and Ryan has built a client base representing total revenue of over $1.5 billion. Ryan is a published author and has been quoted by a number of media sources such as BuzzFeed, CNBC, and Modern Retail.