A/B testing is a powerful tool that makes experimentation straightforward. As Bryan Clayton, CEO of GreenPal, puts it, “Only with A/B testing you can close the gap between customer logic and company logic, matching the internal thought sequence that goes on in your customers’ heads while considering your offer.”
Using an A/B split testing tool, you can confidently declare a winner. But no matter the overall result of the test, it’s imperative to dive deeper and gather insights that measure its success in terms of website conversion rate optimization; otherwise, the data loses all meaning. This is where analysis of the test results steps in.
How it helps you:
- Get a complete understanding of the test results
- Reduce errors in judging the winner
- Decide on the future course of action confidently, whether that is deploying the declared winner or designing a better variant based on the learnings of the test
This article focuses on how to design tests that accurately track significant changes in user behavior, some common testing pitfalls, how to analyze A/B test results, and how to get the most insight when interpreting them.
- The importance of macro-conversion metrics: When configuring a test, you must track, or intend to track, the primary macro-conversion elements for the website. These primary conversions can be a final sale on an e-commerce site, a subscription, or a completed lead generation form. These are the most important site-wide actions and the user KPIs that align most closely with your business goals. Without tracking these, you may see an increase in clicks or other KPIs, but that won’t give the real picture in terms of business requirements. It’s also important to be mindful of which micro-conversion elements are driving changes in macro conversions.
- The importance of micro-conversion metrics: Tracking micro-conversions can either be the main goal or serve as supporting signals for a macro conversion. Your goals might include click goals for key Call-to-Actions (CTAs), email newsletter signups, comments left on articles, content shared on social media, items added to cart, product page views, eBook downloads, page scroll-depth tracking, and more. All this tracking will improve the quality of your findings. In some cases, micro-conversions can begin to explain why a test performed the way it did.
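The distinction between the two metric types can be sketched as a simple computation over an event log. This is a minimal illustration, not a real tracking setup; the visitor IDs, event names, and the macro/micro groupings below are all made up:

```python
from collections import Counter

# Hypothetical event log: one (visitor_id, event) pair per tracked action.
events = [
    ("v1", "page_view"), ("v1", "add_to_cart"), ("v1", "purchase"),
    ("v2", "page_view"), ("v2", "newsletter_signup"),
    ("v3", "page_view"), ("v3", "add_to_cart"),
]

MACRO = {"purchase"}                           # primary, business-goal conversions
MICRO = {"add_to_cart", "newsletter_signup"}   # supporting signals

visitors = {v for v, _ in events}
macro_converters = {v for v, e in events if e in MACRO}
micro_converters = {v for v, e in events if e in MICRO}

print(f"macro conversion rate: {len(macro_converters) / len(visitors):.2f}")
print(f"micro conversion rate: {len(micro_converters) / len(visitors):.2f}")
```

Comparing the two rates side by side is what lets you spot cases where micro-conversions rise while the macro conversion stays flat.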
Common testing pitfalls
- Not enough conversions: The number of visitors may not be as important as the number of conversions while running an experiment. For example, if you have hundreds of thousands of visitors but they’re not converting, you can’t learn much about the difference between test variations. As a rule of thumb, you should target a minimum of 300-400 conversions for each variation before deciding.
- Testing for too short a duration: Larger businesses in particular, with huge traffic volumes and large numbers of conversions, tend to make the mistake of stopping tests too early. The minimum cycle will vary for each business but, for many, the acceptable duration is one week. Running tests for less than a week may mean you simply miss daily trends or patterns. From our experience, we recommend running tests for a minimum of two full business cycles. This allows you to capture changes in weekly trends and makes your conclusions more robust.
- Statistical significance and error bars: Our experience has taught us to be cautious of the statistical significance indicators within testing tools. We aim for a statistical significance of >95% before calling a test, and only once we have met our criteria for conversions and weekly cycles.
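The significance check the bullets above describe can be sketched as a standard two-proportion z-test. This is a minimal stdlib-only sketch, and the conversion counts passed in at the bottom are made up for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(conv_a=300, n_a=10_000, conv_b=360, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 95%: {p < 0.05}")
```

Requiring p < 0.05 corresponds to the >95% significance threshold mentioned above; the test should still only be called once the conversion-count and weekly-cycle criteria are also met.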
How do you analyze A/B testing results?
- First, decide the significance level for the test. This should be considered even before you start the test and would depend primarily on the related industry and how critical the results are for your business.
- Create different segments for your A/B testing and analyze them individually for a better conversion rate. This will help you deploy variations based on specific users. For example, if a test affects new and existing users in different ways, then apply your variation to that specific user segment for Conversion Rate Optimization.
- Track the right metrics: While analyzing A/B testing results, you need to be aware of the correct metrics. They may be primary and secondary, but they must be analyzed individually. Relying on just one metric obscures the bigger picture, so taking an inventory of all the metrics before and after the test works best. There will be cases where the result is inconclusive in terms of the overall conversion goal, but the variant performs significantly better on these metrics.
- Analyze the overall goal conversion: If conversion is your macro goal for Conversion Rate Optimization, you must look at how the original and the variation perform on this parameter. For instance, a re-designed form may help draw more conversions for the website.
- Determine your sample size and have a clear hypothesis: A common scenario is that you end up running multiple tests year after year and, despite having winners, your website conversion rate remains stagnant. One reason could be a small sample size. A good ballpark is 350-400 conversions per variant. While conducting the test, it is essential to have a clear A-vs-B hypothesis grounded in proper conversion research that nails down the problem areas. This will teach you more about your target audience and help you devise better A/B tests in the future. Based on the above factors, you must choose the right statistical test.
- The time factor: Many A/B tests reveal that even a variation winning at 90% confidence can end up losing. This happens when the test is called too early. It is inadvisable to call tests before you reach 95% confidence or higher (leaving at most a 5% chance of error), no matter how much excitement the early results spark. Below are two clear examples that prove this point.
Segmentations to consider: demographic, geographic, and behavioral.
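Segment-level analysis like this can be sketched in plain Python by grouping conversions by (segment, variant). The segments, variants, and visitor records below are hypothetical:

```python
from collections import defaultdict

# Hypothetical per-visitor records: (segment, variant, converted 0/1).
records = [
    ("new", "A", 1), ("new", "A", 0), ("new", "B", 1), ("new", "B", 1),
    ("returning", "A", 1), ("returning", "A", 1),
    ("returning", "B", 0), ("returning", "B", 1),
]

# (segment, variant) -> [conversions, visitors]
totals = defaultdict(lambda: [0, 0])
for segment, variant, converted in records:
    totals[(segment, variant)][0] += converted
    totals[(segment, variant)][1] += 1

for (segment, variant), (conv, n) in sorted(totals.items()):
    print(f"{segment:>9} / {variant}: {conv}/{n} = {conv / n:.0%}")
```

A breakdown like this is what reveals cases where a variant wins for new visitors but loses for returning ones, so you can deploy it only to the segment where it helps.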
But here’s the catch: the overall test results cannot be blindly relied upon to declare a winner or loser. Assess whether the success or failure of the test can be attributed to the changes introduced through the variant. It’s not uncommon to find contradictory behavior between the two. For instance, a re-designed form might encourage more leads, yet not show up as an improvement in the overall website conversion rate because of a decline in conversions in other sections of the variant.
Example 1: two days after starting the test, one variation is losing at 89% confidence.
Example 2: after 10 days, the variation that was losing is now winning with 95% confidence.
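A quick simulation makes the danger of early calls concrete. This sketch runs A/A tests (two identical variations with the same illustrative 5% conversion rate, so there is no real winner) and repeatedly “peeks” at a running z-score against a 90%-confidence threshold; peeking calls a false winner far more often than the nominal error rate suggests:

```python
import random

random.seed(7)

def peeking_false_positive_rate(runs=500, n=1000, check_every=100, z_crit=1.645):
    """Simulate A/A tests and count how often repeated significance
    checks ("peeking") trigger a false winner call."""
    false_calls = 0
    for _ in range(runs):
        a = b = 0
        for i in range(1, n + 1):
            a += random.random() < 0.05   # conversions in variation A
            b += random.random() < 0.05   # conversions in variation B
            if i % check_every == 0:      # peek at the running result
                p = (a + b) / (2 * i)
                se = (2 * p * (1 - p) / i) ** 0.5
                if se > 0 and abs(a - b) / i / se > z_crit:
                    false_calls += 1
                    break
    return false_calls / runs

print(f"false winner rate when peeking at 90% confidence: "
      f"{peeking_false_positive_rate():.0%}")
```

Even though each individual peek only allows a 10% error, checking ten times compounds that error well beyond 10%, which is why waiting for 95%+ confidence over full business cycles matters.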
The A/B split testing tool is great, but it can’t think for you. Acquiring data is only the initial step of any robust analysis program. Transforming that data into insights and, eventually, those insights into actions is what produces desirable results. So, if you want to stop burning a hole in your pocket, pause, consider the points above, and make real progress.
Advanced Analysis: How to get the most out of your test
- Push custom variables from your testing solution into your analytics tool. Analytics tools (like Google Analytics) will give you a much wider set of data with which to compare your test variations.
- Create custom segments based on your test segments to gain better insights. Custom segments for each of your test variations allow you to review the full set of analytics data and analyze the impact by user type (new vs returning), traffic source, average order value, products viewed and bought, etc.
- Certain on-site survey tools will allow you to add test variables to the collected data. This means you can collect qualitative feedback from visitors on your test variations.
- With a savvy developer and the right technology, many testing tools make it possible to include offline conversion data in your tests. For example, if a visitor sees one of your variations and then converts over a sales call, you can feed that data into your test analysis.
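Pushing test variables into an analytics tool usually amounts to attaching the experiment and variant labels to every event you send. The sketch below is generic and vendor-neutral; the field names such as `experiment_id` and `custom_dimensions` are assumptions for illustration, not any specific analytics API:

```python
import json

def build_analytics_event(visitor_id, event_name, experiment_id, variant):
    """Attach the testing tool's experiment/variant labels to an analytics
    event so test traffic can be segmented in the analytics tool later."""
    return {
        "visitor_id": visitor_id,
        "event": event_name,
        "custom_dimensions": {          # hypothetical dimension names
            "experiment_id": experiment_id,
            "variant": variant,
        },
    }

payload = build_analytics_event("v42", "purchase", "exp_checkout_form", "B")
print(json.dumps(payload, indent=2))
```

Once every event carries the variant label, building the custom segments described above becomes a matter of filtering the analytics data on that dimension.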
Avoiding common testing pitfalls will be a boon. Collecting the right data and carrying out more meaningful post-test analysis are all you need to do to get better conversions from your A/B test results. To know more about our A/B testing services, please feel free to reach out to us.