A/B testing

Definition
A/B testing is a method of comparing two versions of a webpage, email, or digital experience to determine which one performs better. By randomly splitting traffic between version A and version B, businesses can measure which variant leads to higher conversions, engagement, or other key metrics.
For example, an online retailer may test two versions of a product page with different call-to-action buttons to see which generates more sales.
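To make the random split concrete, below is a minimal Python sketch of one common bucketing approach: hashing a user identifier so each visitor gets a stable variant across visits. The experiment name, user_id format, and 50/50 split are illustrative assumptions, not part of the definition above.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "product_page_cta") -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing user_id together with the experiment name gives each user
    a stable assignment across visits while splitting traffic ~50/50.
    (The experiment name and 50/50 split are illustrative assumptions.)
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # map the hash to a bucket in 0-99
    return "A" if bucket < 50 else "B"

# Same user always lands in the same variant, so their experience is consistent.
print(assign_variant("user-123"))
```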
Advanced
A/B testing relies on statistical analysis to validate results and reduce the risk of making decisions based on random chance. Modern testing tools integrate with analytics platforms to track user behaviour and measure impact across multiple KPIs.
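As an illustration of the statistical analysis involved, the sketch below runs a two-sided two-proportion z-test on conversion counts. The visitor and conversion numbers are invented for the example; a real test would plug in observed data and a pre-chosen significance level.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                    # two-sided p-value
    return z, p_value

# Illustrative counts, not from the text: 10,000 visitors per variant.
z, p = two_proportion_z_test(conv_a=500, n_a=10_000, conv_b=560, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # declare significance at 0.05 only if p < 0.05
```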
Advanced practices include multivariate testing, personalisation based on audience segments, and sequential testing to refine experiments. Machine-learning-powered systems can automate test allocation by directing more traffic to the better-performing variation as results emerge. Successful testing also requires careful sample sizing and clear hypotheses to avoid biased outcomes.
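One simple instance of machine-learning-driven allocation is a Beta-Bernoulli Thompson sampling bandit, sketched below: traffic gradually shifts toward the variant whose posterior looks better. The variant names, priors, and simulated conversion rates are assumptions for illustration only.

```python
import random

# Beta-Bernoulli Thompson sampling: one (successes+1, failures+1) pair per variant.
# Variant names and the simulated "true" rates below are illustrative assumptions.
state = {"A": [1, 1], "B": [1, 1]}          # Beta(1, 1) priors
true_rate = {"A": 0.050, "B": 0.056}        # unknown in practice; used here to simulate

for _ in range(10_000):
    # Sample a plausible conversion rate for each variant and pick the best draw.
    chosen = max(state, key=lambda v: random.betavariate(*state[v]))
    converted = random.random() < true_rate[chosen]
    state[chosen][0 if converted else 1] += 1   # update that variant's posterior

for v, (wins, losses) in state.items():
    trials = wins + losses - 2                  # subtract the prior pseudo-counts
    print(f"{v}: {trials} visitors, posterior mean {wins / (wins + losses):.3f}")
```

Because the allocator exploits as it learns, the weaker variant receives progressively less traffic, which is the behaviour described above; a fixed-split test would instead keep a 50/50 allocation until the planned sample size is reached.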
Example
A subscription service tests two signup page designs. Version B features a simplified layout and fewer fields, resulting in a 20 percent lift in conversions compared to version A.