A/B Testing Guidelines
This page outlines best practices and governance guidelines for running A/B tests in alignment with WestJet's established Design System. It exists to ensure experimentation drives meaningful learning without fragmenting the user experience, eroding brand trust, or creating design debt.

A/B Testing Is a Validation Tool — Not a Design Tool
A/B testing should be used to validate hypotheses, not to invent new interaction patterns in isolation.
Effective A/B testing at enterprise scale focuses on:
- Measuring behavioral impact, not aesthetic preference
- Reducing uncertainty around known design options
- Validating assumptions surfaced through research and design exploration
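"Measuring behavioral impact" ultimately comes down to comparing how often users complete a target action in each arm. The sketch below shows one common way to do that, a two-proportion z-test over conversion counts; the variant names and numbers are purely illustrative, and your experimentation platform may compute this for you.

```typescript
// Minimal sketch: comparing conversion rates between two arms with a
// two-proportion z-test. The variant names and sample numbers below are
// illustrative, not real data.

interface VariantResult {
  name: string;
  visitors: number;    // users exposed to the variant
  conversions: number; // users who completed the target behavior
}

function twoProportionZTest(a: VariantResult, b: VariantResult): number {
  const p1 = a.conversions / a.visitors;
  const p2 = b.conversions / b.visitors;
  // Pooled proportion under the null hypothesis of no difference
  const pooled = (a.conversions + b.conversions) / (a.visitors + b.visitors);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors)
  );
  // |z| above ~1.96 roughly corresponds to p < 0.05 (two-sided)
  return (p1 - p2) / standardError;
}

// Hypothetical example: did a copy change move checkout completion?
const control = { name: "control", visitors: 12000, conversions: 960 };
const variant = { name: "cta-copy-b", visitors: 12000, conversions: 1032 };
console.log(twoProportionZTest(variant, control).toFixed(2));
```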
Best-Practice Use Cases
A/B testing is most effective when applied to:
- Copy changes (microcopy, value props, CTAs)
- Hierarchy adjustments within existing components
- Default states (e.g., pre-selected options)
- Progressive disclosure timing
- Content sequencing or emphasis
- Performance or load-related changes
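As a concrete illustration of the first use case, a copy-only CTA test can be expressed so that both arms render the same Design System component and only the microcopy differs. The experiment key, the `CtaButton` component name, and the simple hash-based bucketing below are assumptions for the sketch, not a prescribed implementation.

```typescript
// Minimal sketch of a copy-only CTA test: both arms render the same
// design-system button and only the label string differs.

type Variant = "control" | "value-prop-copy";

// Deterministic bucketing: hashing the user id means each user always
// sees the same arm across sessions.
function assignVariant(experimentKey: string, userId: string): Variant {
  let hash = 0;
  for (const char of experimentKey + userId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
  }
  return hash % 2 === 0 ? "control" : "value-prop-copy";
}

const ctaCopy: Record<Variant, string> = {
  control: "Book now",
  "value-prop-copy": "Lock in this fare",
};

function renderBookingCta(userId: string): string {
  const variant = assignVariant("booking-cta-copy", userId);
  // Same component, same props shape; only the microcopy changes per arm.
  return `<CtaButton label="${ctaCopy[variant]}" />`;
}

console.log(renderBookingCta("user-123"));
```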
Enterprise-Specific Considerations
Large organizations must account for:
- Multiple teams shipping simultaneously
- Shared component libraries
- Accessibility, legal, and brand constraints
- Long-term maintainability
Key Principle:
If an A/B test cannot be implemented cleanly using existing Design System components, it likely shouldn’t be tested in production.
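One way this principle can be made tangible in code is to declare variants as configurations of existing components rather than as arbitrary markup. The component and prop names below are hypothetical; the point is that a variant requiring a brand-new component simply cannot be expressed.

```typescript
// Minimal sketch: variants are configurations of existing design-system
// components. Component and prop names here are illustrative assumptions.

// The closed set of components the Design System already ships.
type DesignSystemComponent = "CtaButton" | "FareCard" | "InlineBanner";

interface ExperimentVariant {
  component: DesignSystemComponent;        // must come from the existing library
  props: Record<string, string | boolean>; // copy, defaults, emphasis, etc.
}

const fareEmphasisTest: Record<"control" | "treatment", ExperimentVariant> = {
  control: { component: "FareCard", props: { highlightLowestFare: false } },
  treatment: { component: "FareCard", props: { highlightLowestFare: true } },
};

// A variant that introduces a new, untested component fails type-checking:
// { component: "CustomGlowCard", ... } is not a DesignSystemComponent.
console.log(fareEmphasisTest.treatment);
```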
