

A/B testing is, at this point, widespread and common practice. Whether you're a product manager hoping to quantify the impact of new features (and avoid the risk of negatively impacting growth metrics) or a marketer hoping to optimize a landing page or newsletter subject line, experimentation is the tried-and-true gold standard. It's not only incredibly fun, but it's useful and efficient. In the span of 2-4 weeks, you can try out an entirely new experience and approximate its impact. This, in and of itself, should allow creativity and innovation to flourish, while simultaneously capping the downside of shipping suboptimal experiences.

But even if we all agree on the value of experimentation, there's a ton of debate and open questions as to how to run A/B tests.

One set of open questions about A/B testing strategy is decidedly technical:

- Which metric matters? Do you track multiple metrics, one metric, or build a composite metric?
- How do you properly log and access data to analyze experiments?
- Should you build your own custom experimentation platform or buy from a software vendor?
- Do you run one-tailed or two-tailed t-tests, Bayesian A/B testing, or something else entirely (sequential testing, bandit testing, etc.)?

The other set of questions, however, is more strategic:

- What order should I prioritize my test ideas?
- What goes into a proper experiment hypothesis?
- How frequently should I test, or how many tests should I run?
- How many variants should you run in a single experiment?

It could be the case that there is a single, universal answer to these questions, but I personally doubt it. Rather, I think the answers can differ based on several factors, such as the culture of the company you work at, the size and scale of your digital properties, your tolerance for risk and reward, and your philosophy on testing and ideation. Put simply, there's some nuance based on the company you work at, where you are in terms of company size and resources, and your traffic and testing capabilities.

So this article, instead, will cover the various answers for how you could construct an A/B testing strategy - an approach at the program level - to drive consistent results for your organization. I'm going to break this into two macro-sections: assumptions and a priori beliefs, and the three levers that impact A/B testing strategy success on a program level.

Here are the sections I'll cover with regard to assumptions and a priori beliefs:

- A/B testing is inherently strategic (or, what's the purpose of A/B testing anyway?)
- The value and predictability of A/B testing ideas
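To make the "one-tailed or two-tailed" question concrete, here's a minimal sketch of one common frequentist choice for conversion-rate experiments: a two-tailed, two-proportion z-test. The function name and the traffic/conversion numbers are illustrative assumptions, not from this article; they stand in for a control and a variant in a typical landing-page test.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-tailed z-test for the difference between two conversion rates.

    conv_a / n_a: conversions and visitors for the control.
    conv_b / n_b: conversions and visitors for the variant.
    Returns the z statistic and the two-tailed p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 5.0% vs 6.25% conversion over 4,000 visitors each.
z, p = two_proportion_ztest(conv_a=200, n_a=4000, conv_b=250, n_b=4000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A two-tailed test like this asks "is the variant different in either direction?"; a one-tailed version would halve the p-value but only detect an effect in the direction you pre-specified, which is exactly the kind of trade-off the technical questions above are about.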
