Getting Started with A/B Testing
Discover which link variations drive the most conversions and revenue with A/B testing. Learn when to test, what to test, and how to interpret your results.
About A/B Testing
A/B testing lets you compare two different link variations to determine which performs better. Instead of guessing which product, discount, or offer will resonate with your customers, you can test and make data-driven decisions.
When to Use A/B Testing
A/B testing is valuable when you:
- Launch new products - Test which product drives more interest
- Run promotions - Compare discount strategies (% off vs $ off)
- Optimize campaigns - Find the best offer for email, social, or paid ads
- Test bundles - Determine which product combinations sell best
- Maximize revenue - Identify which variation generates the most revenue per visitor
Getting Started
How A/B Testing Works
When you create an A/B test, Checkout Links automatically:
- Splits traffic between your two link variations (Link A and Link B)
- Tracks performance for both variations independently
- Analyzes results with statistical confidence calculations
- Recommends a winner when sufficient data is collected
Each customer is automatically routed to the same variation on every visit, so their experience stays seamless and consistent.
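Under the hood, this kind of consistent routing is usually implemented by hashing a stable visitor identifier into a bucket. The sketch below shows the general technique; the function names and hashing scheme are illustrative assumptions, not Checkout Links' actual implementation.

```typescript
// Illustrative sketch of deterministic traffic splitting -- NOT the
// actual Checkout Links implementation. A stable visitor ID is hashed
// into a 0-99 bucket; visitors below the split threshold see Link A,
// the rest see Link B.

function hashToPercent(visitorId: string): number {
  // Simple FNV-1a hash, reduced to a bucket in [0, 100).
  let hash = 0x811c9dc5;
  for (let i = 0; i < visitorId.length; i++) {
    hash ^= visitorId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return (hash >>> 0) % 100;
}

function assignVariation(visitorId: string, percentA: number): "A" | "B" {
  return hashToPercent(visitorId) < percentA ? "A" : "B";
}

// The same visitor ID always maps to the same bucket:
console.log(assignVariation("visitor-123", 50)); // same result on every call
```

Because the bucket depends only on the visitor identifier rather than on fresh randomness per request, repeat visits always land on the same link.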
Creating an A/B Test
1. Select Link A and Link B: Choose two existing Checkout Links to compare in your test. These should be different variations of the same concept.
2. Configure the traffic split: Set what percentage of traffic goes to each variation. The default 50/50 split works for most tests.
3. Launch your test: Save and activate your A/B test, then share the test URL to start collecting data.
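Conceptually, the test you just saved boils down to a small piece of configuration. The shape below is a hypothetical illustration of that idea; the field names and URLs are made up for this example, not the product's actual API.

```typescript
// Hypothetical shape of an A/B test configuration -- field names are
// illustrative only, not Checkout Links' actual API.
interface AbTestConfig {
  linkA: string;    // URL of the first Checkout Link
  linkB: string;    // URL of the second Checkout Link
  percentA: number; // share of traffic routed to Link A (0-100)
}

const summerSaleTest: AbTestConfig = {
  linkA: "https://example.com/checkout/summer-sale-percent-off",
  linkB: "https://example.com/checkout/summer-sale-dollars-off",
  percentA: 50, // the default 50/50 split
};
```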
Understanding Your Results
Winner Declaration
The Result card shows you which variation is performing better and provides guidance on when it's safe to end your test.
Confidence Levels:
- More Data Needed: Fewer than 10 total sessions; keep testing
- Low Confidence: A clear winner is emerging, but more data is needed for reliable results
- Medium Confidence: Strong signal; consider ending the test at 75%+ confidence
- High Confidence: Statistically significant results; safe to end the test
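Checkout Links' exact formula isn't spelled out here, but confidence figures like these are commonly derived from a two-proportion z-test on the conversion rates. A minimal sketch, assuming that approach:

```typescript
// Illustrative confidence calculation using a two-proportion z-test.
// This is a common statistical approach, not necessarily the exact
// formula Checkout Links uses.

// Abramowitz & Stegun approximation of the standard normal CDF.
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp((-z * z) / 2);
  const p =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.7814779 + t * (-1.821256 + t * 1.3302744))));
  return z >= 0 ? 1 - p : p;
}

// Returns confidence that the two variations truly differ, in [0, 1].
function confidence(
  sessionsA: number, ordersA: number,
  sessionsB: number, ordersB: number,
): number {
  const rateA = ordersA / sessionsA;
  const rateB = ordersB / sessionsB;
  // Pooled conversion rate under the "no difference" null hypothesis.
  const pooled = (ordersA + ordersB) / (sessionsA + sessionsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / sessionsA + 1 / sessionsB));
  if (se === 0) return 0;
  const z = Math.abs(rateA - rateB) / se;
  return 2 * normalCdf(z) - 1; // two-sided: 1 minus the p-value
}

// Example: 100 sessions per arm, 12 vs 6 orders -> roughly 86% confidence.
console.log(`${(confidence(100, 12, 100, 6) * 100).toFixed(1)}%`);
```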
Key Metrics Comparison
Compare essential performance metrics between your variations:
- Sessions: Total visitors to each variation
- Orders: Completed purchases from each variation
- Revenue: Total sales generated by each variation
- Conversion rate: Percentage of visitors who complete a purchase
- Revenue per visitor: Average revenue generated per session
- AOV: Average order value for completed purchases
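Each of these metrics is a simple ratio over the raw counts. A quick sketch of the arithmetic behind the comparison:

```typescript
// How the comparison metrics are derived from raw counts per variation.
interface VariationStats {
  sessions: number; // total visitors
  orders: number;   // completed purchases
  revenue: number;  // total sales, in your store currency
}

function metrics(v: VariationStats) {
  return {
    conversionRate: v.orders / v.sessions,     // share of visitors who buy
    revenuePerVisitor: v.revenue / v.sessions, // revenue averaged over sessions
    aov: v.revenue / v.orders,                 // average order value
  };
}

// Example: 200 sessions, 10 orders, $450 in revenue.
console.log(metrics({ sessions: 200, orders: 10, revenue: 450 }));
// -> { conversionRate: 0.05, revenuePerVisitor: 2.25, aov: 45 }
```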
Traffic Split Strategies
50/50 Split (Recommended)
- Best for: Most A/B tests
- Fastest path to statistical significance
- Equal exposure for both variations
90/10 Split
- Best for: Testing risky changes
- Limits exposure to a potentially worse variation
- Takes longer to reach significance
80/20 Split
- Best for: Gradual rollouts
- Balances risk and speed
- Good for testing new features
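The trade-off between these splits is driven by the smaller arm: the test can only reach significance as fast as its least-trafficked variation collects sessions. A simplified illustration, assuming significance requires a fixed number of sessions in each arm:

```typescript
// Rough illustration of why unequal splits take longer: the test can
// only move as fast as its smaller arm. Simplifying assumption: a fixed
// number of sessions is needed in EACH arm to reach significance.

function daysToSignificance(
  dailyTraffic: number,
  splitPercentA: number,
  sessionsNeededPerArm: number, // e.g. 100+ sessions for reliable results
): number {
  const smallerShare = Math.min(splitPercentA, 100 - splitPercentA) / 100;
  return Math.ceil(sessionsNeededPerArm / (dailyTraffic * smallerShare));
}

// With 100 visitors/day and 100 sessions needed in each arm:
console.log(daysToSignificance(100, 50, 100)); // 50/50 -> 2 days
console.log(daysToSignificance(100, 80, 100)); // 80/20 -> 5 days
console.log(daysToSignificance(100, 90, 100)); // 90/10 -> 10 days
```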
Troubleshooting
If your results read "More Data Needed," your test needs more traffic before the analysis is meaningful. Share your test URL more widely or wait for organic traffic to build up data.
Minimum thresholds:
- 10 total sessions to start analysis
- 100+ sessions for reliable results
When conversion rates differ by less than 0.5%, the difference may not be meaningful for your business. Consider running the test longer, accepting that both variations perform similarly, or testing more dramatic differences.
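To see why small gaps are expensive to confirm, a standard sample-size estimate makes the cost concrete. The sketch below uses the textbook two-proportion formula at roughly 95% confidence and 80% power, reading "0.5%" as 0.5 percentage points; this is general statistics, not a Checkout Links feature.

```typescript
// Approximate sessions needed PER VARIATION to detect a given absolute
// lift in conversion rate at ~95% confidence and ~80% power (standard
// two-proportion sample-size formula; illustrative only).

function sessionsPerArm(baselineRate: number, liftAbsolute: number): number {
  const zAlpha = 1.96; // 95% confidence, two-sided
  const zBeta = 0.84;  // 80% power
  const p1 = baselineRate;
  const p2 = baselineRate + liftAbsolute;
  const pBar = (p1 + p2) / 2;
  const n =
    (zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
      zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 /
    liftAbsolute ** 2;
  return Math.ceil(n);
}

// Detecting a 0.5-point lift on a 3% baseline takes far more traffic
// than detecting a 2-point lift:
console.log(sessionsPerArm(0.03, 0.005)); // roughly 20,000 sessions per arm
console.log(sessionsPerArm(0.03, 0.02));  // roughly 1,500 sessions per arm
```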
If Link A or Link B shows no sessions or orders, verify both links work correctly, check that traffic splitting is functioning, and ensure both variations are accessible to customers.
Related Articles
- A/B Testing Use Cases - Real-world examples and best practices
- Analytics Dashboard - Track performance across all your links and tests