A/B Testing

Compare two versions of a demo and understand which one performs better

Available from the Growth plan.

Overview

A/B Testing lets you compare two versions of a demo and understand which one performs better based on real user behavior.

Instead of guessing or debating internally, you can run controlled experiments and optimize demos for:

  • Higher completion rates

  • More CTA clicks

  • More leads captured

  • Stronger buyer intent

This feature is designed specifically for marketing and growth teams using demos as a conversion asset.


Who this feature is for

  • Product marketing teams

  • Growth & CRO teams

  • Demand generation and website teams

If demos are part of your acquisition, activation, or lead capture flow, A/B Testing helps you continuously improve performance.


How A/B testing works (conceptually)

An A/B test compares:

  • Variant A - your original demo

  • Variant B - a second demo (ideally a duplicate of Variant A with a single change)

Traffic is automatically split between the two variants, and Storylane tracks how each version performs across key metrics.

Both variants receive real traffic, so results reflect actual buyer behavior - not simulations.
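
A common way to split traffic fairly is to assign each visitor to a variant deterministically, so a returning visitor keeps seeing the same version. Storylane handles the assignment for you; the sketch below only illustrates the idea, and the visitor ID, hash, and function name are hypothetical rather than Storylane's actual implementation.

  // Illustrative sketch only - Storylane assigns variants for you.
  // Hashing a stable visitor ID makes the assignment "sticky":
  // the same visitor always lands on the same variant.
  function assignVariant(visitorId: string): "A" | "B" {
    let hash = 0;
    for (const char of visitorId) {
      hash = (hash * 31 + char.charCodeAt(0)) >>> 0; // unsigned 32-bit rolling hash
    }
    return hash % 2 === 0 ? "A" : "B"; // even/odd → 50/50 split
  }

  assignVariant("visitor-123"); // same input → same variant, every time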


Best practice: duplicate the demo for Variant B

Before creating an A/B test, we suggest duplicating your original demo, making your change in the copy, and using that copy as Variant B.

This ensures:

  • Identical demo size and layout

  • Clean, comparable results

  • A single variable under test

⚠️ Changing multiple things at once makes results harder to interpret.

Recommended: Duplicate → change one thing → test.


What you should test

Marketing teams commonly test:

  • Lead form vs no lead form

  • Short vs long demo

  • CTA at the start vs CTA at the end

  • Gated vs ungated experience

  • Use-case intro vs feature-first intro

  • Video + voiceover vs silent demo

  • Strong CTA copy vs soft CTA copy

Always test one variable at a time.


Understanding the traffic split

Default: 50/50

Best for:

  • Most experiments

  • Faster, balanced learning

When to adjust the split

You may want to change the traffic split when you are:

  • Protecting a high-performing demo

  • Testing a major structural change

  • Running experiments during high-traffic campaigns

Example:

  • 80% → proven demo

  • 20% → experimental version

This balances learning speed with conversion risk.
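
The same mechanism extends to an uneven split: the favored variant simply claims a larger share of the hash buckets. A minimal sketch of the 80/20 example above (illustrative only, using the same hypothetical hash):

  // Illustrative weighted split matching the 80/20 example.
  function assignWeighted(visitorId: string, percentA: number): "A" | "B" {
    let hash = 0;
    for (const char of visitorId) {
      hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
    }
    return hash % 100 < percentA ? "A" : "B"; // percentA of 100 buckets go to A
  }

  assignWeighted("visitor-123", 80); // ~80% of visitors see the proven demo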


Understanding test results

Clear winner

One variant consistently outperforms the other across key metrics.

No clear winner

This is still a valuable outcome. It tells you that the tested change did not significantly impact performance.

Use these learnings to inform your next experiment.
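
For a rough sense of whether a gap between variants is real or just noise, a standard two-proportion z-test is the usual back-of-the-envelope check. The sketch below uses made-up numbers; treat your Storylane analytics as the source of truth:

  // Rough significance check for two conversion rates
  // (standard two-proportion z-test; example numbers are hypothetical).
  function zScore(convA: number, nA: number, convB: number, nB: number): number {
    const pA = convA / nA;
    const pB = convB / nB;
    const pooled = (convA + convB) / (nA + nB);
    const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
    return (pB - pA) / se;
  }

  // 120/1,000 CTA clicks on Variant A vs 155/1,000 on Variant B:
  zScore(120, 1000, 155, 1000); // ≈ 2.27; |z| > 1.96 → significant at 95%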


Ending an A/B test

When you’re ready, you can end the test to stop traffic splitting.

If Variant A wins

  • No action required

  • Your existing embedded demo link already points to Variant A

If Variant B wins

  • Update the demo link wherever it’s embedded (website, landing pages, campaigns, emails)

  • This ensures future traffic goes to the better-performing demo

Ending a test is a deliberate decision and cannot be undone.


Tips for better experiments

  • Test one variable at a time

  • Let the test run long enough to collect meaningful data (see the sizing sketch after these tips)

  • Avoid ending tests too early

  • Use results to plan the next iteration
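
"Long enough" depends on your baseline conversion rate and the smallest lift you care about detecting. As a rough guide, the standard two-proportion sample-size formula at 95% confidence and 80% power looks like this (all rates below are illustrative):

  // Rough sample size per variant needed to detect a given relative lift
  // (two-proportion test, 95% confidence, 80% power; illustrative only).
  function sampleSizePerVariant(baseRate: number, relativeLift: number): number {
    const zAlpha = 1.96; // 95% confidence
    const zBeta = 0.84;  // 80% power
    const p1 = baseRate;
    const p2 = baseRate * (1 + relativeLift);
    const pBar = (p1 + p2) / 2;
    const n =
      zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
      zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
    return Math.ceil((n * n) / ((p2 - p1) * (p2 - p1)));
  }

  // Detecting a 20% relative lift on a 10% completion rate:
  sampleSizePerVariant(0.10, 0.20); // ≈ 3,837 visitors per variant

At a few hundred demo visitors per week, that would mean running the test for several weeks before calling a winner.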

A/B Testing works best as a continuous optimization loop, not a one-off task.
