As it sometimes does… a personal online conversation got me thinking about marketing, something completely unrelated to the conversation itself.
The conversation took place in my personal FB feed. It was about getting readers. You know, the glasses that help you focus on things up close. Something most of us need sooner or later.
With everything in lockdown here in Ontario, my friend and her friend were speculating (lol) on what prescription they needed for what would be their first readers. And wondered when they might get to see (another lol) an eye doctor.
Having worn glasses since I was 10 years old, I chimed in as an armchair expert. I said they could probably figure it out by trying on a few over-the-counter readers at their local drug store. Because IMHO the prescription part of an eye appointment is just a series of trials of "is this one better than that?" as the doctor swaps out one lens for another. Since readers usually come in a narrow range of low strengths, they could figure out something themselves. At least for the short term. (The important part of seeing a professional is to check the health of your eyes, so please don't take my comments as medical advice not to go.)
Long story short, it got me thinking about A/B marketing tests to improve conversions. To me the subjective lens test is an A/B test in many ways. With a captive audience. And a repeated series of subjective this-or-that tests. Maybe I'm oversimplifying it.
Only, with marketing A/B tests we don't have a captive audience we can repeat the test on, one that helps us narrow things down by telling us their choice. Instead, we usually compare the actions or results of a large audience split across two independent options.
I believe most things that we want to learn would take forever to find out in an A/B test.
In true A/B testing, you need to design your experiment well. You need a big enough independent audience to trust the results. Hold all else equal. And run things in parallel, to keep context the same. And run for enough time to be confident in your results.
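To make that concrete, here's a minimal sketch of how a two-variant test is typically judged once it has run: a two-proportion z-test on conversion rates. The function name and all the numbers are made-up illustrations, not data from any real campaign.

```python
import math

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return the z-score comparing two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Hypothetical example: 5,000 visitors per variant, B converts a bit better.
z = ab_test_z(250, 5000, 300, 5000)
# |z| > 1.96 roughly corresponds to 95% confidence (two-tailed).
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

Notice what the formula demands: the standard error only shrinks as visitor counts grow, which is exactly why small lifts need big, independent audiences and long run times before you can trust them.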
It has its uses. But maybe there are alternatives.
Maybe it’s not the colour of the buy button or its placement on the page. Or the subject line of a specific email. Maybe it’s because it’s the fifth email. Or they came to your site from a certain channel. Or clicked through after viewing a comparable product.
Reframe the problem. Dig a little deeper into your current wins and see if you can maximize what is currently working. Step back and consider the bigger picture and the flow a customer takes – through your touchpoints and their purchases.
Be open to testing more than A or B. And possibly testing in combinations. This can sound like you are breaking the control aspect of strict A/B testing. But it might highlight more learning faster. For example, maybe test 5 lead magnets and really look at persona characteristics of who selects which one. As well as what and when they do as their next journey step.
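The lead-magnet idea above is more observation than controlled experiment, and a sketch of the bookkeeping might look like the following. Everything here is hypothetical: the five magnets, the persona labels, and the journey steps are invented for illustration.

```python
from collections import Counter, defaultdict

# Each event: (persona, lead magnet chosen, next journey step).
# Invented sample data standing in for real analytics events.
events = [
    ("founder",   "checklist", "booked_call"),
    ("founder",   "ebook",     "unsubscribed"),
    ("marketer",  "webinar",   "booked_call"),
    ("marketer",  "checklist", "opened_email_5"),
    ("developer", "template",  "booked_call"),
]

picks = Counter(magnet for _, magnet, _ in events)
by_persona = defaultdict(Counter)   # who selects which magnet
next_steps = defaultdict(Counter)   # what they do afterwards
for persona, magnet, step in events:
    by_persona[magnet][persona] += 1
    next_steps[magnet][step] += 1

for magnet, count in picks.most_common():
    print(magnet, count, dict(by_persona[magnet]), dict(next_steps[magnet]))
```

The point isn't the code, it's the shape of the question: instead of one winner between A and B, you get a picture of who chooses what and where they go next.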
I know. I know. It sounds like more work. And maybe it is. It’s ongoing effort rather than a discrete point in time test.
Though by reframing to observe what is working rather than seeking what might work, you may get better long-term results from your marketing tests. You also learn to respond more quickly to future changes. For when A or B no longer works.
Maybe A/B test the two approaches to prove which is best, lol.