
Avoid making these common mistakes when running multiple variants

If you aren’t careful, you might select a less-than-optimal option

Oftentimes, we want to test multiple variants against a control at the same time. There are many reasons for this — sometimes we don’t want to wait to run each variant against a control in series, sometimes we’re indecisive, sometimes we don’t want to create another ticket for a future sprint (I don’t judge). Whatever the reason — here we are, with multiple variants running at once. At least we’re not running an MVT (multivariate test). (Joking aside — there are some justifiable reasons to run MVTs, but I’d only do so after running a series of split tests; at that point, the up-front work to set up an MVT is worth the trouble, in my opinion. Don’t @ me.)

One common mistake is not accounting for multiple comparisons. Simply put, the idea is that if you dig hard enough, you’ll eventually find something that’s statistically significant — which is most likely a false positive. A common procedure to control for multiple comparisons is to use something like a Bonferroni correction (remember kids, Wikipedia is not a reliable reference). This correction, in essence, decreases the alpha of each comparison — dividing it by the number of comparisons — making it harder for a variant to be declared different because the significance threshold is harder to reach. For those familiar with Adobe Target, there’s an option to correct for this in their sample size calculator (I personally don’t love that it’s a bit hard to spot).
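To make that concrete, here’s a minimal sketch in Python of how a Bonferroni correction plays out when three variants are each compared against the same control (the variant names and p-values are made up purely for illustration):

```python
# Minimal sketch of a Bonferroni correction for three variants,
# each compared against the same control.
# The p-values below are hypothetical, purely for illustration.
p_values = {"variant_a": 0.030, "variant_b": 0.012, "variant_c": 0.210}

alpha = 0.05                 # significance level you'd use for a single comparison
m = len(p_values)            # number of comparisons against the control
corrected_alpha = alpha / m  # Bonferroni: divide alpha by the number of comparisons

for name, p in p_values.items():
    significant = p < corrected_alpha
    print(f"{name}: p = {p:.3f} -> significant at alpha {corrected_alpha:.4f}? {significant}")
```

If you’d rather not hand-roll this, `statsmodels.stats.multitest.multipletests` implements Bonferroni (along with gentler corrections such as Holm) out of the box.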


However, a more common mistake is to pick the variant that produces the highest lift without checking whether it’s statistically different from the other variants. Unless you check the differences between the variants themselves, you risk actually selecting a worse-performing variant!
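For illustration, here’s a rough sketch of one way to run that check: a two-proportion z-test between the two best-performing variants using statsmodels (the conversion counts and traffic numbers are invented):

```python
# Rough sketch: compare the two best-performing variants directly
# with a two-proportion z-test. All numbers are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [540, 565]     # conversions observed for variant A and variant B
visitors = [10_000, 10_000]  # visitors exposed to each variant

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")

# If p is above your (corrected) alpha, the observed gap between the two
# variants may just be noise, and picking the "winner" on lift alone could
# mean shipping the worse option.
```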

What other gotchas do you watch out for when running multiple variants? I’d love to hear them!


