A/B testing is dead. Although it’s been a powerful tool for improving marketing campaigns and conversion rates, I can’t say I’m sad to see it go. Not only is it administratively cumbersome and easy to get wrong, it’s also incredibly wasteful. Now, thanks to advances in deep learning and data science, there is a superior way to set strategic direction and optimize campaigns for success.
A powerful tool misapplied
A/B testing is an amazing tool for validating whether changes to an asset or campaign improve the KPI that is being optimized.
However, all a test will tell you is whether a version with one change performs better than another. It won’t validate whether it was a good idea to invest in that concept in the first place. In other words, A/B testing is great for finding a local maximum, but it can’t surface the best of all possible options. It’s just as likely that an option nobody considered is the one that should have been chosen to begin with.
In other words, just testing alone leaves a lot of upside on the table.
Here’s an example from one of our customers, Kao Corporation’s Salon division. They were determining John Frieda’s marketing strategy for the year and weren’t sure which direction they wanted to invest in for their imagery. Should they pursue an aspirational luxury look, go for something more natural, or focus on the products?
As a large global company, it’s impractical to run small tests of each direction in every single market they operate in and THEN decide where to invest heavily. So they do what most companies do: decide based on opinions, go with hunches, or pay some high-priced agencies and consultants. By the time they’re even able to test the content, they’ve already committed to a direction without any scientific basis for whether that investment was the best one.
What if a lifestyle concept was the most effective? Decisions like this get marketing executives fired. All. The. Time. And no amount of “testing and learning” whether this caption or that caption gets slightly better clicks will change the fact that the campaign was doomed to fail from the outset.
These are high-stakes, high-cost decisions, and yet A/B testing is entirely useless for making them. The reason the stakes are so high is simple: for a company to test and learn its way to the best assets requires a lot of resources, and the cost of testing a sub-optimal hypothesis is steep.
Waste is a Feature, Not a Bug
By the time a test is ready to be run, a lot of time, money, and energy has already been invested. Sticking with the Kao example, after they choose the brand strategy they’ll be pursuing for that year’s cycle, the marketing machine then takes over.
1. Brand and creative department heads create the briefs and direction for all the content that will be created.
2. A mix of agencies, contractors, and internal teams create the content.
3. Sometimes brands will do pre-testing or focus groups here, but those have their own issues: small sample sizes, Hawthorne effect bias, and high costs.
4. Preliminary assets are reviewed, and some are thrown away for “not being on brand” or for other reasons that are some variation of “I don’t like it.”
5. The final set of assets is handed off to the performance and brand marketing teams.
6. A/B testing of individual assets commences.
By the time a company gets to A/B testing, everyone on the team has invested so much political, emotional, and monetary capital that there’s no turning back. Whether or not the initiative was a sound idea to begin with, marketers then use A/B testing to make minor optimizations around the edges, but mostly as a crutch that brings a little meaningful data into a process that, up to this point, has been almost entirely gut-driven.
Only after they have invested hundreds of thousands or millions of advertising dollars to “test and learn” is the campaign judged to be a success or a failure. And regardless of the outcome, the process starts all over again from there.
No wonder the CMO’s head is the first to roll when things go bad.
Okay, so what’s the alternative?
Great question. To find the answer, we have to ask ourselves: what’s the point of all this testing anyway? It’s actually pretty simple: to uncover what will inspire a target audience to take a desired action.
For a long time, A/B testing has been one of the only scientific tools marketers have had to answer that question reliably, even if it came with all those drawbacks and compromises.
Thankfully, next-generation analytics powered by data science and deep learning can give marketers powerful insights into which visual, textual, and contextual cues are driving audiences to convert (or not). Using these insights at the top of the decision-making process makes investing heavily in marketing initiatives FAR more effective, and takes the guesswork out of the process where it really matters.
How do these next-generation analytics work, and how can you use them to break your over-dependence on A/B testing? I’ll cover that in the next post.
Check back Friday to find out.