A/B testing may hand you a clear winner, but what really matters is the truth you uncover by analyzing the results: that is what gets customers to say ‘yes’ to your offer.

A/B testing is a great tool that makes testing fun and easy. As Bryan Clayton, CEO of GreenPal, puts it, “Only with A/B testing can you close the gap between customer logic and company logic, matching the internal thought sequence that goes on in your customers’ heads while considering your offer.”

By conducting A/B testing with an A/B split testing tool, you can confidently conclude a winner. But regardless of the overall result of the test, it is imperative to dive deeper and gather insights that measure its success in terms of website conversion rate optimization, or the exercise loses all meaning. This is where analysis of the test results steps in.

And it is this analysis that helps you:

  • Get a well-rounded understanding of the test results;
  • Reduce errors in judging the winner;
  • Decide on the future course of action confidently, whether that means rolling out the declared winner or designing a better variant based on the learnings of the test.

How do you analyze A/B testing results?

    • Decide the significance level for the test - this should be fixed even before you start the test and will depend primarily on your industry and on how critical the results are to your business.
    • Create different segments - for your A/B test, and analyse each segment individually for conversion rate, so you can deploy variations to the specific users they work for (a short sketch follows the segment types below).

For example, if a test affects new and existing users in significantly different ways, apply the variation only to the user segment where it improves conversion.
The segments to consider are demographic, geographic, and behavioural.
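
To make this concrete, here is a minimal sketch of segment-level analysis in Python with pandas. The column names (`variant`, `segment`, `converted`) and the tiny dataset are assumptions for illustration, not output from any particular testing tool.

```python
import pandas as pd

# Hypothetical raw export of the test: one row per visitor, with the
# variant they saw, a segment label, and whether they converted.
data = pd.DataFrame({
    "variant":   ["A", "B", "A", "B", "A", "B", "A", "B"],
    "segment":   ["new", "new", "new", "new",
                  "returning", "returning", "returning", "returning"],
    "converted": [0, 1, 1, 1, 1, 0, 0, 0],
})

# Conversion rate per variant within each segment, so a lift that only
# exists for (say) new visitors is not averaged away.
by_segment = (
    data.groupby(["segment", "variant"])["converted"]
        .agg(visitors="count", conversions="sum", rate="mean")
)
print(by_segment)
```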

  • Track the right metrics - while analysing A/B testing results, you need to watch the right metrics. They may be primary or secondary, and each must be analysed individually. Relying on just one metric obscures the bigger picture, so it helps to make an inventory of all relevant metrics before and after the test. There will be cases where the result is inconclusive on the overall conversion goal, yet the variant performs significantly better on one of these other metrics.
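
As a rough illustration of keeping such an inventory, the sketch below computes a primary metric and a couple of secondary metrics per variant; the metric names (`purchased`, `added_cart`, `signed_up`) are made up for the example.

```python
import pandas as pd

# Hypothetical per-visitor log: the primary metric (a purchase) plus two
# secondary metrics (add-to-cart, newsletter sign-up) for each variant.
events = pd.DataFrame({
    "variant":    ["A", "A", "A", "B", "B", "B"],
    "purchased":  [0, 1, 0, 0, 1, 1],
    "added_cart": [1, 1, 0, 1, 1, 1],
    "signed_up":  [0, 0, 1, 1, 0, 1],
})

# One table with every metric side by side, so a flat primary metric
# does not hide a clear win (or loss) on a secondary one.
metric_inventory = events.groupby("variant").mean()
print(metric_inventory)
```
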
  • Analyse the overall goal conversion - with conversion being your macro goal for Conversion Rate Optimization, it is imperative to look at how the original and the variation perform on this parameter.

For instance, a re-designed form may be helping draw more conversions for the website.

But here’s the catch. The overall test result cannot be blindly relied upon to declare a winner (or a loser). It is important to assess whether the success or failure of the test can rightly be attributed to the changes the variant was testing, because it is not uncommon to find contradictory behaviour between the two.

For instance, a re-designed form might be encouraging more leads, yet this is not reflected as an improvement in the overall website conversion rate because conversions declined in other sections of the variant.

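Picking up the re-designed form example, a quick way to surface this kind of contradiction is to put the section-level metric and the overall conversion next to each other. The numbers and field names below (`sessions`, `form_leads`, `orders`) are invented purely for illustration.

```python
# Invented aggregate numbers for the re-designed form scenario: the
# variation wins on form leads but loses on overall orders.
results = {
    "original":  {"sessions": 10_000, "form_leads": 480, "orders": 310},
    "variation": {"sessions": 10_000, "form_leads": 560, "orders": 285},
}

for name, r in results.items():
    lead_rate = r["form_leads"] / r["sessions"]
    overall_rate = r["orders"] / r["sessions"]
    print(f"{name}: lead rate {lead_rate:.2%}, overall conversion {overall_rate:.2%}")
```
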
    • Determine the sample size and have a clear hypothesis - a common scenario is that organisations run multiple tests year after year and, despite having winners, the website conversion rate stays stagnant. One reason could be a very small sample size; a good ballpark is 350-400 conversions per variant. It is equally important to start with a clear hypothesis of A vs B, built on proper conversion research that nails the problem areas, so you learn more about your target audience and devise better A/B tests in future. Based on these factors, choose the right statistical test.
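
As a back-of-the-envelope check on how long it takes to collect 350-400 conversions per variant, here is a rough sketch; the daily traffic and baseline conversion rate are assumed figures, so plug in your own.

```python
# Rough estimate of how long a test must run to collect ~400 conversions
# per variant. Daily traffic and baseline conversion rate are assumptions.
target_conversions_per_variant = 400
daily_visitors = 2_000            # assumed traffic to the tested page per day
baseline_conversion_rate = 0.03   # assumed 3% baseline conversion rate
num_variants = 2                  # original + one variation

visitors_per_variant_per_day = daily_visitors / num_variants
conversions_per_variant_per_day = visitors_per_variant_per_day * baseline_conversion_rate
days_needed = target_conversions_per_variant / conversions_per_variant_per_day

print(f"Roughly {days_needed:.0f} days to reach "
      f"{target_conversions_per_variant} conversions per variant")
```
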
    • The time factor - many A/B tests show a variation winning at almost 90% confidence, only for it to lose badly once the test is given more time. This happens because the test was called too early. It is not advisable to call a test before you reach 95% confidence or higher (leaving at most a 5% chance of error), no matter how much excitement the early numbers spark.

Below are two snapshots of the same test that prove this.

This is two days after starting the test, where one variation is losing at 89% confidence.

[Image: A/B test results after 2 days]

After 10 days, the variation that was losing is winning with 95% confidence.

[Image: A/B test results after 10 days]
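
Those confidence figures tie back to the significance level you set at the start: they come from a statistical test on the two conversion rates. Below is a minimal sketch of a pooled two-proportion z-test in plain Python; the visitor and conversion counts are invented for illustration, and real testing tools may use different (for example Bayesian or sequential) methods.

```python
from math import sqrt
from statistics import NormalDist

# Invented running totals for the original (A) and the variation (B).
visitors_a, conversions_a = 4_800, 150
visitors_b, conversions_b = 4_800, 186

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Pooled two-proportion z-test.
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z_score = (rate_b - rate_a) / std_err

# Two-sided p-value and the corresponding confidence level.
p_value = 2 * (1 - NormalDist().cdf(abs(z_score)))
confidence = 1 - p_value
print(f"z = {z_score:.2f}, p = {p_value:.3f}, confidence = {confidence:.1%}")
# Only call the test once confidence clears the threshold you set up front (e.g. 95%).
```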

An A/B split testing tool is great, but it doesn’t do the thinking for you; acquiring information is just the first step of any robust data analysis program. Transforming that information into insights, and eventually those insights into actions, is what produces the results you want. So, if you want to stop burning a hole in your pocket, pause, think about the points above, and make real progress.