A/B test

A controlled experiment that randomly assigns users to one of two variants to estimate the causal impact of a change on a target metric.

When to use it

  • Validating UX, pricing, onboarding, or messaging changes.
  • Choosing between two implementation options with measurable tradeoffs.
  • Reducing decision risk before broad rollout.

PM decision impact

A/B testing reduces opinion-led decisions and helps teams invest in changes that move meaningful metrics.

How to do it in 2026

Define the hypothesis, primary metric, guardrail metrics, minimum detectable effect, required sample size, and stop rules before launching the test; analyze only when a stop rule fires.
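The sample-size step follows directly from the minimum detectable effect. A minimal sketch using the standard normal-approximation formula for comparing two proportions; the 30% baseline, 5-point effect, two-sided alpha of 0.05, and 80% power are illustrative assumptions, not values from this entry:

```python
import math

def sample_size_per_arm(p_baseline, p_target, z_alpha=1.959964, z_beta=0.841621):
    """Users needed per arm to detect a shift from p_baseline to p_target.

    Defaults correspond to two-sided alpha = 0.05 (z = 1.96) and
    80% power (z = 0.84), using the two-proportion normal approximation.
    """
    variance = p_baseline * (1 - p_baseline) + p_target * (1 - p_target)
    effect = p_target - p_baseline
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Hypothetical: 30% baseline activation, 5-point minimum detectable effect.
print(sample_size_per_arm(0.30, 0.35))
```

Running the sizing before launch also surfaces traffic constraints early: if the product cannot deliver the required users per arm in a reasonable window, the minimum detectable effect is too ambitious.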

Example

Variant B changes the onboarding headline and checklist sequence; activation improves by 5.2 percentage points without harming retention guardrails.
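An outcome like this is judged with a significance test on the two activation rates. A sketch using a two-proportion z-test with a pooled standard error; the user counts are hypothetical, chosen only to illustrate a 5.2-point lift:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for the difference between
    two conversion rates, using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical counts: 40.0% control vs 45.2% variant activation.
z, p = two_proportion_z(2400, 6000, 2712, 6000)
print(z > 1.96, p < 0.05)
```

The same lift on a tenth of the traffic would not clear the significance bar, which is why the example's conclusion depends on the pre-computed sample size, not just the observed difference.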

Common mistakes

  • Running tests without pre-defined decision rules.
  • Peeking early and stopping on noise.
  • Ignoring sample size and traffic constraints.
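The peeking mistake can be made concrete with a small A/A simulation: there is no true effect, yet checking significance at repeated interim looks "finds" one far more often than the nominal 5%. A sketch with illustrative parameters (arm size, number of peeks, and seed are arbitrary):

```python
import math
import random

def z_stat(successes_a, successes_b, n):
    """Two-proportion z statistic for equal arm sizes n."""
    p_pool = (successes_a + successes_b) / (2 * n)
    se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
    if se == 0:
        return 0.0
    return (successes_b / n - successes_a / n) / se

def false_positive_rate(n_experiments=500, n_per_arm=1000, peeks=10, seed=7):
    """Simulate A/A tests (no true effect) where the experimenter stops
    at the first 'significant' interim look."""
    rng = random.Random(seed)
    checkpoints = [n_per_arm * k // peeks for k in range(1, peeks + 1)]
    hits = 0
    for _ in range(n_experiments):
        a = b = seen = 0
        for n in checkpoints:
            while seen < n:
                a += rng.random() < 0.5
                b += rng.random() < 0.5
                seen += 1
            if abs(z_stat(a, b, n)) > 1.96:
                hits += 1
                break
    return hits / n_experiments

print(false_positive_rate())
```

With ten interim checks the simulated false-positive rate lands well above the nominal 5%, which is exactly why stop rules belong in the pre-registered design rather than being improvised mid-test.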

Last updated: March 6, 2026