3 Ways to Learn From a Flat AB Test Result


When I first started out in conversion rate optimisation, I always felt like ‘flat’ AB test results – where there appears to be no real difference in performance between the control version of a page and your challenger – were something shameful to be swept under the carpet. I must have made a mistake, right?

Actually, no. As with any other type of test result, there’s plenty to learn from flat results. Here are three steps to follow next time you get one.

1) Look at what you did (and when you did it)

Maybe your hypothesis was spot on but your execution wasn’t. For example, maybe your users are indeed struggling with that navigation structure but the new style you put in place in your challenger didn’t make things any easier for them.

Look at your execution critically. Is the way you did it truly the best way to prove your hypothesis? There’s more than one way to skin a cat – you might just need to go back to the design/prototyping stage.

Also look at the timing of your launch. I once launched an AB test only to discover that the company’s marketing team had launched a cashback site offer at the same time. This meant that users landing in the sales funnel were now converting at 80% across both variations – a huge increase on the pre-test conversion rates.

Under normal conditions the difference between the variations would have been far more visible – and so it proved: when I reran the test after the cashback promotion had finished, it produced a clear winner.
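
A quick sanity check can flag this kind of overlap before you write a test off as flat. The sketch below is a minimal illustration, not anything from the original test: it assumes you can export daily totals with columns named "date", "sessions" and "conversions" (those names are my invention), and it simply compares the blended conversion rate during the test window with the pre-test baseline.

```python
# Minimal sketch: flag a test window whose blended conversion rate drifts far
# from the pre-test baseline (e.g. because a promotion launched mid-test).
# Column names ("date", "sessions", "conversions") are illustrative assumptions.
import pandas as pd

def baseline_shift(daily: pd.DataFrame, test_start: str, threshold: float = 0.25) -> bool:
    """Return True if conversion during the test differs from the pre-test
    baseline by more than `threshold` (relative change)."""
    daily = daily.copy()
    daily["date"] = pd.to_datetime(daily["date"])
    pre = daily[daily["date"] < test_start]
    during = daily[daily["date"] >= test_start]

    pre_cr = pre["conversions"].sum() / pre["sessions"].sum()
    test_cr = during["conversions"].sum() / during["sessions"].sum()
    return abs(test_cr - pre_cr) / pre_cr > threshold

# Example: daily totals across *both* variations, since an external promotion
# lifts control and challenger alike.
daily = pd.DataFrame({
    "date":        ["2024-05-01", "2024-05-02", "2024-05-10", "2024-05-11"],
    "sessions":    [1000, 1100, 950, 1020],
    "conversions": [52, 58, 760, 830],
})
print(baseline_shift(daily, test_start="2024-05-10"))  # True -> investigate timing
```

If the check fires, it doesn't prove the test is invalid, but it does tell you to find out what else changed on the site (or in marketing) during your test window before trusting a flat read.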

If you think it’s worth rerunning the test with a different challenger (or at a different time), do it.

2) Drill down into the data

An uplift in one segment can offset a drop in another to produce what looks like a flat result overall.

Focusing only on the overall performance can disguise deeper opportunities. Unless you dig into performance across different segments, you’ll never know.

Split out test performance by device, traffic source, browser, operating system, new/returning visitors – even geo-location. You may need to run a test for longer to get a significant result in a more targeted segment, but taking a more granular view could highlight an opportunity to benefit a section of your user base with a more personalised approach.
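
As a rough illustration of that breakdown, here is a minimal sketch assuming session-level data with columns "segment" (e.g. device), "variation" ("control"/"challenger") and "converted" (0/1) – those names, and the use of pandas and statsmodels, are my assumptions rather than anything prescribed in the article. It computes conversion rates per segment and runs a two-proportion z-test in each one.

```python
# Minimal sketch of a per-segment A/B breakdown. Column names and variation
# labels ("control"/"challenger") are illustrative assumptions.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

def segment_report(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for segment, seg_df in df.groupby("segment"):
        # Conversions and sessions for each variation within this segment.
        pivot = seg_df.groupby("variation")["converted"].agg(["sum", "count"])
        # Two-proportion z-test: does the challenger convert differently here?
        _, p_value = proportions_ztest(pivot["sum"], pivot["count"])
        rows.append({
            "segment": segment,
            "control_cr": pivot.loc["control", "sum"] / pivot.loc["control", "count"],
            "challenger_cr": pivot.loc["challenger", "sum"] / pivot.loc["challenger", "count"],
            "p_value": p_value,
        })
    return pd.DataFrame(rows)
```

An 'overall flat' result can hide, say, a mobile uplift cancelled out by a desktop drop; a report like this makes that visible, and any segment with a low p-value and a meaningful lift becomes a candidate for a more targeted follow-up test.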


3) If the change didn’t matter enough, that matters

If your execution was right but had zero impact on any of your segments, the likelihood is that what you’re testing doesn’t matter to your users.

This is a valuable lesson in itself. You now know that your hypothesis was false and that the element you tested doesn't matter much to your users.

It’s almost like a reverse exclusion test. In most cases, you can now deprioritise or simply cancel any future tests aimed at optimising that element of your website journey.

In terms of an outcome from your test in this situation, you can either promote your preferred challenger or keep the control in place – it doesn’t really matter. After all, it isn’t important to your visitors so it isn’t going to impact the bottom line for your business (unless one version is far better for SEO of course, in which case pick that one).

Find the thing your customers care about and improve that instead.

If you are interested in conversion rate optimisation for your business website, you can find out more about our lovely CRO team here.