Running A/B tests is indispensable if you want to make good decisions. However, if you make mistakes, your decisions can have worse consequences than simply relying on your intuition. Therefore, it is important to do A/B tests the right way, without mistakes.
If you are not willing to implement A/B tests properly, then you should forget about doing A/B tests completely. You have only two options:
- Do A/B tests and correctly interpret the results without mistakes
- Completely forget about doing any A/B test and just use your common sense or intuition, which may mislead you.
In this guide, I will use the example of an online store for the sake of simplicity, but these mistakes generalize to any other kind of A/B test for your business.
Suppose you have an online store and you want to check which “buy” button makes you the most money. Does the red button or the green button have the better click-through conversion rate (CTR)? You measure a 15% CTR on the red button and a 10% CTR on the green button.
Mistake #1: Comparing the Past and the Present
Never compare a past conversion rate to the present conversion rate. If you had the red button in January and changed it to green in February, do not compare the CTR results as if they were an A/B test. The observed change in conversion might come from a difference in visitor-type distribution or any other common seasonal change. If you make this mistake, your results will be totally useless.
What to do?
If you have two variations, randomly (or alternately) show the different buttons to the visitors, but it is also very important that each visitor always sees only one variant: for example, Visitor A should always see the red button, and Visitor B should always see the green button.
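As a rough illustration, here is a minimal Python sketch of such “sticky” assignment, assuming every visitor has a stable identifier (the visitor_id value and the variant labels are placeholders, not part of any particular tool):

```python
import hashlib

def assign_variant(visitor_id: str, variants=("red", "green")) -> str:
    """Deterministically assign a visitor to a variant by hashing their ID,
    so the same visitor always sees the same button on every visit."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The assignment is sticky: repeated calls for the same visitor give the same variant.
assert assign_variant("visitor-42") == assign_variant("visitor-42")
print(assign_variant("visitor-42"))
```

Hashing the ID instead of drawing a random variant on every page view is one simple way to guarantee the “one visitor, one variant” rule.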
Mistake #2: Checking the Conversion Rates Only
If you are just watching the raw conversion rates (for example, a 15% CTR for the red button and a 10% CTR for the green button), then you are making a mistake. The difference in rates might be a coincidence, and you cannot be sure whether the red or the green button is truly better. The core purpose of A/B testing is to eliminate this accidental difference and show you how likely it is that the change is real.
What to do?
Always examine the certainty (significance level) of the A/B test and the confidence intervals.
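To make this concrete, here is a minimal standard-library Python sketch of a two-proportion z-test for the 15% vs. 10% example; the 1,000 visitors per variant are an invented sample size for illustration:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that there is no real difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1.0 - p_pool) * (1.0 / n_a + 1.0 / n_b))
    z = (p_a - p_b) / se
    return 2.0 * (1.0 - normal_cdf(abs(z)))

# Red: 150/1000 = 15% CTR, Green: 100/1000 = 10% CTR (illustrative sample sizes).
print(two_proportion_p_value(150, 1000, 100, 1000))  # ~0.0007, well below 0.05
```

With these (made-up) sample sizes the difference would be significant; with only 20 visitors per button the very same 15% vs. 10% gap would not be.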
Mistake #3: Looking at the P-Value (Certainty) Only
Almost all A/B test calculators (except ours) display only whether the difference between case A and case B is significant or not. If yes (usually at 95%), they will show you a big green checkmark. Do not be satisfied with this! “Significant” only means that it is likely that the red button is somewhat better. If you experience a 15% vs. 10% conversion rate, and the result is significant, it does not mean that the red button is better by 50%. The red one might be only 0.001% better than the green button.
What to do?
Calculate the so-called confidence interval, which will tell you exactly what you want to know: what is the most likely range of the difference between the conversion rates of the variants.
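As a rough sketch, the 95% confidence interval for the difference between the two conversion rates can be computed with the normal approximation; the 1,000 visitors per variant are again an invented sample size:

```python
from math import sqrt

def diff_confidence_interval(conv_a: int, n_a: int, conv_b: int, n_b: int, z: float = 1.96):
    """Approximate 95% confidence interval for the difference p_a - p_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_a - p_b
    return diff - z * se, diff + z * se

# Red: 150/1000, Green: 100/1000 (illustrative numbers).
low, high = diff_confidence_interval(150, 1000, 100, 1000)
print(f"difference is between {low:.1%} and {high:.1%}")  # roughly 2.1% to 7.9%
```

In this made-up example you would know not only that red is likely better, but that the advantage is plausibly somewhere between about 2 and 8 percentage points.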
Mistake #4: Not Testing for a Full Period
It is a mistake to run your experiment for only five days, from Monday to Friday. This range does not include weekends, and weekend purchasers might have totally different behaviors. Also note that testing in January always gives you results for January only. If your visitors behave differently in February, you will have no data for that.
What to do?
Always run your experiment for a full cycle that is meaningful for your business.
Mistake #5: Not Turning Your Results into Decisions
Running an experiment is great, and interpreting the results of the A/B test is better. However, if you do not use your data and turn your results into decisions, the whole thing is pointless and a waste of time. A/B tests are for helping you make a decision.
What to do?
It is a good idea to pre-establish the decisions you will make if case A or case B is the winner. For example, you may say before running the A/B test that you will change the button from green to red if and only if the red button is the winner and is better by at least 20%.
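A minimal sketch of such a pre-registered decision rule might look like the following; the 20% threshold comes from the example above, and the function name and inputs are hypothetical:

```python
def should_switch_to_red(red_rate: float, green_rate: float,
                         significant: bool, min_relative_lift: float = 0.20) -> bool:
    """Pre-registered rule: switch only if the result is significant and
    red beats green by at least the agreed relative lift (20% here)."""
    if not significant or green_rate == 0:
        return False
    relative_lift = (red_rate - green_rate) / green_rate
    return relative_lift >= min_relative_lift

# 15% vs. 10% is a 50% relative lift, so a significant result triggers the switch.
print(should_switch_to_red(0.15, 0.10, significant=True))  # True
```

Writing the rule down (even this simply) before the test starts keeps you from moving the goalposts after seeing the numbers.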
Mistake #6: Running an A/B Test Only Once
It is not enough to plan, implement, and run an A/B test and interpret the results, even if you do it the right way and make a decision based on the results. It is not enough because your business, your visitors, and their behavior are continuously changing. If you realize that the red button has much better conversion rates than the green one, you only know that this was the case in the past. When the target audience changes, future visitors might prefer the green one.
What to do?
Repeat your A/B tests periodically to check whether differences in conversion rates persist, because they might change as time passes.
Mistake #7: Forgetting to Test What Truly Matters
You may test the color of the buttons, but what really matters might be the position, caption, or size of the button.
What to do?
Use your common sense and test the things that can really cause big changes in conversion rates, and do not waste your time comparing “buy now” to “Buy Now,” for example.
Mistake #8: Disliking and Ignoring the Results
You surely have a preconception or intuition about what the results will be. You like to think that you are clever and that you can predict the outcome because you are an expert in your field. If the results of an A/B test contradict your expectations, they can cause cognitive dissonance. The consequence of this situation is that many people will just ignore the results completely. You might think that the green button is better because it is more natural and more beautiful than the red one, and users are not afraid to click it, and you will want to support this presumption with an A/B test. If the outcome shows that the red button is better by 50%, you should forget your preconception.
What to do?
Believe in your results. Even if you think that red is ugly, you have to change your button from green to red because the visual design of your website does not matter as much as the conversion rates and revenue.
Mistake #9: Not Understanding What 95% Means
A 95% certainty (or 0.05 p-value) does not mean that you can be sure about the results of an A/B test. It only means that it is likely that one variation is at least somewhat better than the other. It is normal to run an experiment until you reach 95% confidence, but take into consideration that this also means there is a 5% chance of a false positive. So if you run 100 different A/B tests, approximately five of them will give you a wrong result.
What to do?
Keep in mind that you can never be absolutely sure about the results. If the decision based on your A/B test is very important and has serious consequences, then you may want 99% to 99.9% certainty.
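One way to see what the 5% false-positive rate means in practice is a small simulation. In the sketch below both variants share the exact same true conversion rate, yet roughly five out of 100 “tests” will still come out significant at the 95% level; the rates and sample sizes are invented:

```python
import random
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided two-proportion z-test p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    return 2.0 * (1.0 - normal_cdf(abs((p_a - p_b) / se)))

random.seed(1)
TRUE_RATE, VISITORS, TESTS = 0.10, 2000, 100
false_positives = 0
for _ in range(TESTS):
    # Both "variants" are simulated with the identical true conversion rate.
    conv_a = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
    conv_b = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
    if p_value(conv_a, VISITORS, conv_b, VISITORS) < 0.05:
        false_positives += 1
print(false_positives)  # expect roughly 5 "significant" results out of 100 identical variants
```

Raising the bar to 99% or 99.9% simply means tightening that threshold (0.01 or 0.001 instead of 0.05), at the cost of needing more visitors.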
Mistake #10: Tracking the Wrong Conversion KPI
If you optimize your website and the color of the buttons so that you reach a very good CTR for the “add to cart” button, you may still be wrong. The add-to-cart click-through rate is important, but you should measure purchases instead. For example, with a red button, the number of items placed in the cart is likely to be higher than with a green button, but purchases may not be. Even if you optimize your site for more purchases, the number of small purchases per visit might increase while the total purchase amount per month decreases. If you use A/B tests, it is very important to define the correct conversion.
What to do?
Keep in mind that improving one conversion rate can cause other conversion rates to decrease.
Bonus mistake: Forgetting to Segment
Knowing that the CTR of the red button is better than that of the green button is not the whole picture. Maybe males prefer green, but females prefer red. Maybe impulsive purchasers prefer red, but rich, wise, returning purchasers prefer green. Maybe young visitors prefer red, but older ones prefer green.
What to do?
Always gather as much data as you can and keep running A/B tests until you can segment the results.
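As an illustration of what segmenting can look like, here is a minimal sketch that breaks the same raw click events down by a visitor attribute; the field names and the tiny event list are made up:

```python
from collections import defaultdict

# Hypothetical raw events: (segment, variant, clicked).
events = [
    ("female", "red", True), ("female", "green", False),
    ("male", "red", False), ("male", "green", True),
    # ... in a real test, thousands of events would go here ...
]

clicks: dict = defaultdict(int)
views: dict = defaultdict(int)
for segment, variant, clicked in events:
    views[(segment, variant)] += 1
    clicks[(segment, variant)] += int(clicked)

# Print the CTR per segment and variant instead of one overall number.
for segment, variant in sorted(views):
    rate = clicks[(segment, variant)] / views[(segment, variant)]
    print(f"{segment:>6} / {variant:<5}: CTR = {rate:.1%}")
```

Remember that each segment needs enough visitors on its own to reach significance, so segmenting usually means running the test longer.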
Try out our free online A/B Split Test Calculator