Oops I Did It Again: Common Testing Mistakes and How to Fix Them

Ever wondered why your A/B tests aren’t delivering the results you expected? Here are the most common A/B testing mistakes made by top companies, and how you can avoid them.

Allon Korem
Chief Executive Officer

Over the past 10 years, I’ve been helping dozens of companies with their A/B testing. I’ve come across misconceptions about testing and experimentation; I’ve witnessed “abuse” of statistical tools and methods; and I’ve had endless discussions about the importance of proper A/B testing. Last week I had a great time talking with Skye Scofield and Tyler VanHaren from Statsig about the common A/B testing mistakes companies make and how to tackle them.

Some takeaways:

  • Data Integrity: Keeping data accurate is crucial. Issues like sample ratio mismatches (SRM) and technical errors can skew your results; a quick SRM check is sketched after this list.
  • Proper Statistical Methods: Using the right statistical methods is essential. Misapplying tests, or skipping statistical analysis altogether, leads to wrong conclusions.
  • Avoiding Peeking: Repeatedly checking results mid-test and stopping as soon as they look significant inflates the false positive rate; the simulation below shows by how much. Sequential testing can help prevent this.
  • Adequate Sample Sizes: Many tests fail simply because they are underpowered. A power analysis before launch is crucial; see the sample-size sketch below.
  • Bayesian vs. Frequentist Methods: Bayesian A/B testing is often seen as immune to peeking, but stopping early on a posterior threshold still fails to maintain the false positive rate (aka type I error), as the last simulation below illustrates. Properly used frequentist methods work just as well.
  • Practical Tips: Run A/A tests to validate your setup (an example follows), pick KPIs that match your goals, and always conduct thorough statistical analyses.
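
To make the SRM point concrete, here is a minimal sketch of the kind of check an experimentation platform runs: a chi-square goodness-of-fit test of observed assignments against the intended traffic split. The user counts are hypothetical.

```python
# Sample ratio mismatch (SRM) check via chi-square goodness-of-fit.
from scipy.stats import chisquare

control_users, treatment_users = 50_210, 49_100  # observed assignments (hypothetical)
total = control_users + treatment_users
expected = [total / 2, total / 2]                # intended 50/50 split

stat, p_value = chisquare([control_users, treatment_users], f_exp=expected)
if p_value < 0.001:  # a strict threshold is common for SRM alerts
    print(f"Possible SRM (p = {p_value:.2e}) -- investigate before trusting results")
else:
    print(f"No SRM detected (p = {p_value:.3f})")
```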
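And here is a small simulation (with made-up parameters) of what peeking does under the null: both arms draw from the same distribution, yet an analyst who runs a fixed-alpha test at every peek and stops at the first p < 0.05 declares a winner far more often than 5% of the time.

```python
# How peeking inflates the false positive rate: true effect is zero,
# but stopping at the first "significant" peek rejects well above 5%.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_simulations, peeks, users_per_peek = 2_000, 10, 200
false_positives = 0

for _ in range(n_simulations):
    a = rng.normal(size=peeks * users_per_peek)  # identical distributions
    b = rng.normal(size=peeks * users_per_peek)
    for k in range(1, peeks + 1):
        n = k * users_per_peek
        _, p = ttest_ind(a[:n], b[:n])
        if p < 0.05:              # analyst "peeks" and stops at significance
            false_positives += 1
            break

print(f"False positive rate with peeking: {false_positives / n_simulations:.1%}")
# Typically lands well above the nominal 5%.
```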
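For the sample-size point, a back-of-the-envelope power calculation for a two-proportion test, using the standard normal-approximation formula. The baseline rate and minimum detectable effect (MDE) are assumptions for illustration.

```python
# Rough sample size per arm for detecting an absolute lift `mde`
# on a baseline conversion rate `p_base` (two-sided test).
import math
from scipy.stats import norm

def sample_size_per_arm(p_base, mde, alpha=0.05, power=0.8):
    z_alpha = norm.ppf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # e.g. 0.84 for 80% power
    p_var = p_base + mde
    pooled_var = p_base * (1 - p_base) + p_var * (1 - p_var)
    return math.ceil((z_alpha + z_beta) ** 2 * pooled_var / mde ** 2)

# Detecting a 1-point lift on a 10% baseline takes serious traffic:
print(sample_size_per_arm(0.10, 0.01))  # roughly 15,000 users per arm
```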
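The Bayesian claim is easy to check the same way. A Monte Carlo sketch with a flat Beta(1, 1) prior and a hypothetical “ship B when P(B > A) > 95%” stopping rule shows the false ship rate climbing well above 5% when both arms have the same true conversion rate.

```python
# Bayesian peeking under the null: stopping as soon as P(B > A)
# crosses 95% does NOT cap the type I error at 5%.
import numpy as np

rng = np.random.default_rng(0)
sims, peeks, batch, true_rate = 1_000, 20, 500, 0.05  # illustrative setup
false_calls = 0

for _ in range(sims):
    succ, trials = np.zeros(2), np.zeros(2)
    for _ in range(peeks):
        conv = rng.binomial(batch, true_rate, size=2)  # same rate in both arms
        succ += conv
        trials += batch
        # Beta(1, 1) prior -> Beta posterior; sample to estimate P(B > A)
        post_a = rng.beta(1 + succ[0], 1 + trials[0] - succ[0], size=2_000)
        post_b = rng.beta(1 + succ[1], 1 + trials[1] - succ[1], size=2_000)
        if (post_b > post_a).mean() > 0.95:  # "ship B" decision rule
            false_calls += 1
            break

print(f"False ship rate with Bayesian peeking: {false_calls / sims:.1%}")
# Well above 5%, despite the 95% posterior threshold.
```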
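Finally, an A/A test in simulation form: two identical arms, repeated many times. A healthy setup should flag “significance” about 5% of the time at alpha = 0.05; a materially different rate points to an assignment or analysis bug. Numbers here are illustrative.

```python
# A/A sanity check: identical arms should be "significant" ~5% of runs.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
runs, significant = 1_000, 0
for _ in range(runs):
    arm_a = rng.exponential(scale=30.0, size=5_000)  # e.g. session length
    arm_b = rng.exponential(scale=30.0, size=5_000)  # identical distribution
    _, p = ttest_ind(arm_a, arm_b)
    significant += p < 0.05

print(f"A/A significance rate: {significant / runs:.1%}  (should be ~5%)")
```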