What Is A/B Testing?
A/B testing (also called split testing) is an experiment in which two variants of a page, creative, or email are shown to different segments of an audience simultaneously. By comparing performance metrics such as conversion rate, click-through rate (CTR), and average order value (AOV) between the two variants, marketers can determine which performs better and roll out the winning version to all users.
In ecommerce, A/B testing is the scientific backbone of conversion rate optimisation (CRO): it removes guesswork from optimisation decisions and provides statistically valid evidence for which changes improve performance.
A/B Testing Visual Content
Product images and video are among the highest-impact elements to test: they are the most visible elements on a page or in an ad, and their effect on conversion is large and measurable. Common visual A/B tests include:
- Packshot vs. lifestyle image as the primary PDP photo
- Single image vs. image gallery
- Static image vs. video
- Different background colours or lifestyle contexts
AI photography tools like Bryft make visual A/B testing much more accessible by dramatically reducing the cost of producing test variants. What previously required a photo shoot to test can now be produced in minutes.
A/B Testing Best Practices
- Test one variable at a time to isolate the effect
- Run tests until statistical significance is reached (typically 95% confidence)
- Ensure a sufficient sample size: underpowered tests produce misleading results
- Consider seasonal and day-of-week effects in test design
- Document all tests and results to build institutional knowledge
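The significance check in the practices above can be sketched with a standard two-proportion z-test. This is a minimal illustration, not a substitute for a proper testing platform; the function name and the 3.0% vs. 3.6% conversion rates are hypothetical.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-tailed two-proportion z-test for an A/B conversion experiment.

    conv_a/conv_b: conversions per variant; n_a/n_b: sessions per variant.
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: 3.0% vs 3.6% conversion over 10,000 sessions each
z, p = two_proportion_z_test(300, 10_000, 360, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 95%: {p < 0.05}")
```

A p-value below 0.05 corresponds to the 95% confidence threshold mentioned above; note that peeking at results early and stopping when p dips below 0.05 inflates the false-positive rate.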
Real-World Example
A beauty retailer A/B tests the primary product image on its bestselling moisturiser. Version A: white-background packshot. Version B: AI-generated lifestyle image showing the product on a marble bathroom shelf. Version B achieves a 19% higher conversion rate at 98% statistical confidence, across 12,000 sessions per variant. The lifestyle image is rolled out as the primary image across the catalogue.
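To see how figures like those in the example fit together, the sketch below plugs the stated sample size and lift into a two-proportion z-test. The 3.0% baseline conversion rate is an assumption for illustration only; the example does not state it.

```python
from math import erf, sqrt

# Hypothetical check of the moisturiser example: 12,000 sessions per variant,
# an ASSUMED 3.0% baseline conversion rate, and a ~19% relative lift.
n = 12_000
conv_a = round(n * 0.030)         # packshot:  assumed 3.0% baseline
conv_b = round(n * 0.030 * 1.19)  # lifestyle: 19% relative lift

p_a, p_b = conv_a / n, conv_b / n
p_pool = (conv_a + conv_b) / (2 * n)
se = sqrt(p_pool * (1 - p_pool) * 2 / n)
z = (p_b - p_a) / se
confidence = erf(abs(z) / sqrt(2))  # two-sided confidence level, 2*Phi(z) - 1
print(f"lift = {p_b / p_a - 1:.1%}, z = {z:.2f}, confidence = {confidence:.1%}")
```

Under that assumed baseline, a 19% lift at this sample size clears the 98% confidence level; with a lower baseline rate, the same relative lift would need more traffic to reach significance.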