Getting Started with A/B Testing

Discover which link variations drive the most conversions and revenue with A/B testing. Learn when to test, what to test, and how to interpret your results.

About A/B Testing

A/B testing lets you compare two different link variations to determine which performs better. Instead of guessing which product, discount, or offer will resonate with your customers, you can test and make data-driven decisions.

A/B testing works by splitting your traffic between two variations and tracking which one generates more conversions and revenue. Checkout Links handles all the statistical analysis automatically.

When to Use A/B Testing

A/B testing is valuable when you:

  • Launch new products - Test which product drives more interest
  • Run promotions - Compare discount strategies (% off vs $ off)
  • Optimize campaigns - Find the best offer for email, social, or paid ads
  • Test bundles - Determine which product combinations sell best
  • Maximize revenue - Identify which variation generates the most revenue per visitor

Focus your testing efforts on high-traffic campaigns where a small improvement in conversion rate can significantly impact revenue. For example, lifting a campaign with 10,000 monthly visitors and a $50 average order value from a 2% to a 2.5% conversion rate adds about $2,500 in monthly revenue.

Getting Started

  1. Identify what to test — Choose a specific element to test: product selection, discount type, bundle configuration, or offer structure.
  2. Create your variations — Set up two different checkout links (Link A and Link B) that differ in only one major way.
  3. Set up the test — Create an A/B test in Checkout Links, select your two links, and configure traffic splitting.
  4. Run the test — Share your test URL and let it run for at least 1-2 weeks to collect meaningful data (see the sample-size sketch after this list).
  5. Analyze results — Review the performance metrics and statistical confidence to determine the winner.
  6. Implement the winner — Use the winning variation in your campaigns to maximize conversions and revenue.
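
How long "meaningful data" takes depends on your traffic and on how large a difference you expect between the variations. As a rough illustration, here is the standard two-proportion sample-size estimate; the formula is general statistics, not something specific to Checkout Links, and the rates and traffic figures are illustrative assumptions:

```ts
// Per-variation sample size for detecting a conversion-rate change,
// using the standard two-proportion normal-approximation formula.
// The rates and traffic numbers below are illustrative assumptions.

const Z_ALPHA = 1.96; // two-sided 95% confidence
const Z_BETA = 0.84;  // 80% power

function sessionsPerVariation(baselineRate: number, expectedRate: number): number {
  const pooled = (baselineRate + expectedRate) / 2;
  const numerator =
    Z_ALPHA * Math.sqrt(2 * pooled * (1 - pooled)) +
    Z_BETA * Math.sqrt(
      baselineRate * (1 - baselineRate) + expectedRate * (1 - expectedRate)
    );
  return Math.ceil((numerator / (baselineRate - expectedRate)) ** 2);
}

// Detecting a lift from a 2% to a 3% conversion rate:
const needed = sessionsPerVariation(0.02, 0.03); // ≈ 3,821 per variation
// At 400 sessions/day on a 50/50 split (200 per variation), that is
// roughly 19 days, in line with running a test for at least 1-2 weeks.
console.log(needed);
```

In practice this means low-traffic links need either more time or a larger expected difference between variations to produce a reliable result.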

How A/B Testing Works

When you create an A/B test, Checkout Links automatically:

  1. Splits traffic between your two link variations (Link A and Link B)
  2. Tracks performance for both variations independently
  3. Analyzes results with statistical confidence calculations
  4. Recommends a winner when sufficient data is collected

Your customers are automatically and consistently routed to the same variation for a seamless experience.
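
Consistent routing like this is commonly implemented by hashing a stable visitor identifier into a bucket. The sketch below illustrates that general technique, assuming a stable identifier such as a first-party cookie; it is not Checkout Links' actual implementation:

```ts
// A minimal sketch of consistent traffic splitting, assuming a stable
// visitor identifier (for example, a first-party cookie). It illustrates
// the general technique, not Checkout Links' actual implementation.

function hashToUnitInterval(visitorId: string): number {
  // FNV-1a 32-bit hash, mapped onto [0, 1)
  let hash = 0x811c9dc5;
  for (let i = 0; i < visitorId.length; i++) {
    hash ^= visitorId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash / 0x100000000;
}

function assignVariation(visitorId: string, linkASharePercent: number): "A" | "B" {
  // The same visitorId always hashes to the same bucket, so a returning
  // customer always lands on the variation they saw first.
  return hashToUnitInterval(visitorId) < linkASharePercent / 100 ? "A" : "B";
}

assignVariation("visitor-123", 50); // 50/50 split
assignVariation("visitor-123", 90); // 90/10 split favoring Link A
```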

Creating an A/B Test

Create a new A/B test — Navigate to the A/B Tests section and click "Create test" to start a new split test.

Set up your test details — Add a descriptive name for internal tracking and configure your test shortcode and URL.

The test name is for your internal use only — customers won't see it.


Select Link A and Link B — Choose two existing Checkout Links to compare in your test. These should be different variations of the same concept.

Link A and Link B must be different links. You cannot test a link against itself.


Configure traffic split — Set what percentage of traffic goes to each variation. The default 50/50 split works for most tests.

Traffic is split consistently — the same customer will always see the same variation.


Launch your test — Save and activate your A/B test. Share the test URL to start collecting data.

Understanding Your Results

Winner Declaration

The Result card shows you which variation is performing better and provides guidance on when it's safe to end your test.

Confidence Levels:

  • More Data Needed: Fewer than 10 total sessions — keep testing
  • Low Confidence: An apparent winner, but more data is needed for reliability
  • Medium Confidence: Strong signal, consider ending test at 75%+ confidence
  • High Confidence: Statistically significant results — safe to end test
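
The confidence levels above are typically derived from a statistical test on the two conversion rates. Here is a minimal sketch using a standard two-proportion z-test; Checkout Links' exact method may differ, but the idea is the same: turn the observed difference into a confidence that it is not just chance.

```ts
// A sketch of the standard two-proportion z-test on conversion rates.
// This is the textbook approach, not necessarily Checkout Links' exact math.

interface VariationStats {
  sessions: number;
  orders: number;
}

function confidenceOfDifference(a: VariationStats, b: VariationStats): number {
  const rateA = a.orders / a.sessions;
  const rateB = b.orders / b.sessions;
  const pooled = (a.orders + b.orders) / (a.sessions + b.sessions);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / a.sessions + 1 / b.sessions)
  );
  if (standardError === 0) return 0;
  const z = Math.abs(rateA - rateB) / standardError;
  return 1 - 2 * (1 - normalCdf(z)); // 1 minus the two-sided p-value
}

// Abramowitz & Stegun approximation of the standard normal CDF
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = Math.exp((-z * z) / 2) / Math.sqrt(2 * Math.PI);
  const p =
    d * t * (0.31938153 + t * (-0.356563782 + t * (1.781477937 +
      t * (-1.821255978 + t * 1.330274429))));
  return z >= 0 ? 1 - p : p;
}

// Example: 6 orders / 120 sessions vs 14 orders / 118 sessions
// confidenceOfDifference({ sessions: 120, orders: 6 }, { sessions: 118, orders: 14 })
```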

Key Metrics Comparison

Compare essential performance metrics between your variations:

  • Sessions: Total visitors to each variation
  • Orders: Completed purchases from each variation
  • Revenue: Total sales generated by each variation
  • Conversion rate: Percentage of visitors who complete a purchase
  • Revenue per visitor: Average revenue generated per session
  • AOV: Average order value for completed purchases
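
Sessions, orders, and revenue are the raw counts; the other three metrics are derived from them. A small sketch of those relationships (the field and function names here are illustrative, not an actual API):

```ts
// Sessions, orders, and revenue are tracked directly; the remaining
// metrics are derived from them. Names are illustrative only.

interface VariationResults {
  sessions: number;
  orders: number;
  revenue: number;
}

function derivedMetrics({ sessions, orders, revenue }: VariationResults) {
  return {
    // Share of visitors who complete a purchase
    conversionRate: sessions > 0 ? orders / sessions : 0,
    // Blends conversion rate and order size into one number
    revenuePerVisitor: sessions > 0 ? revenue / sessions : 0,
    // AOV: average spend on completed orders only
    averageOrderValue: orders > 0 ? revenue / orders : 0,
  };
}

// Example: 200 sessions, 8 orders, $640 revenue
// → conversion rate 4%, revenue per visitor $3.20, AOV $80.00
derivedMetrics({ sessions: 200, orders: 8, revenue: 640 });
```

When your variations differ in price or bundle size, revenue per visitor is often the more useful deciding metric, since a variation with a lower conversion rate can still win on total revenue.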

Traffic Split Strategies

50/50 Split (Recommended)

  • Best for: Most A/B tests
  • Fastest path to statistical significance
  • Equal exposure for both variations

90/10 Split

  • Best for: Testing risky changes
  • Limits exposure to potentially worse variation
  • Takes longer to reach significance

80/20 Split

  • Best for: Gradual rollouts
  • Balanced risk and speed
  • Good for testing new features

Troubleshooting

Not Enough Data

Your test needs more traffic to provide meaningful results. Share your test URL more widely or wait for organic traffic to accumulate.

Minimum thresholds:

  • 10 total sessions to start analysis
  • 100+ sessions for reliable results (see the sketch below)
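
With an uneven split, the smaller arm is the bottleneck. Assuming you want each variation to clear the 100-session mark above, a quick calculation shows how much total traffic each split strategy needs:

```ts
// With an uneven split, the smaller arm is the bottleneck for reaching
// the per-variation session counts above. Illustrative math only.

function totalSessionsNeeded(minPerVariation: number, smallerSharePercent: number): number {
  return Math.ceil(minPerVariation / (smallerSharePercent / 100));
}

totalSessionsNeeded(100, 50); // 50/50 split → 200 total sessions
totalSessionsNeeded(100, 20); // 80/20 split → 500 total sessions
totalSessionsNeeded(100, 10); // 90/10 split → 1,000 total sessions
```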

Results Too Close to Call

When conversion rates differ by less than 0.5 percentage points, the difference may not be meaningful for your business. Consider running the test longer, accepting that both variations perform similarly, or testing more dramatic differences.

One Variation Shows No Activity

If Link A or Link B shows no sessions or orders, verify that both links work correctly, check that traffic splitting is functioning, and ensure both variations are accessible to customers.
