In a recent post, Magnet Monster posted an interview with Jeremy Horowitz about what to do when you don’t have enough data. You can check out the post here.
Jeremy goes on to say that we aren’t in a science lab and we don’t have to wait for statistical significance. This, as expected, rubbed the CRO community the wrong way. However, resisting my knee-jerk reaction, I have to admit there is a lot of truth to what Jeremy says, though perhaps it could have been framed a bit better. (Or perhaps it was phrased perfectly, since the CRO community engaged with the post.)
Here’s my take based on my years of experience running hundreds of online Experiments (as well as some offline ones) for companies like Loblaw Digital (Canada’s largest retailer), Bell Canada, Autodesk, Ritual, 500px, Flipp, and theScore.
From a pure experimentation perspective, one absolutely can forgo statistics and just see what happens. The dictionary definition of an experiment is: “a scientific procedure undertaken to make a discovery, test a hypothesis, or demonstrate a known fact.” A scientific procedure (aka the scientific method) says nothing about having to use a statistical approach for analysis.
That said, without statistical rigor (and I’m not saying that only randomized controlled trials allow for statistical rigor because, surprise, Experimentation is bigger than A/B testing on landing pages), the data isn’t very trustworthy, which increases the chances of making a wrong decision and drawing the wrong conclusions.
Now before the angry CROs claim victory: if one has the time, the funding, the agility, and the energy to go down potentially incorrect paths and roll back once the mistake is discovered, one can do so. It’s a pretty expensive path, one I wouldn’t recommend in most cases, but it is a path.
In cases where you don’t have a lot of traffic, you might be better served by spending your time doing research and understanding the customer. But even with that done, you eventually have to try something on real people, so you’re back to the initial problem: not having enough traffic.
So assuming you have done a good amount of research and you need to try something, I’d personally lean towards finding several statistical approaches that work with the available traffic and comparing the results to see if they align, so that I can make a more trustworthy decision. For example, one could take a Bayesian approach (which performs well with low sample sizes), take a bootstrap-with-replacement approach, run repeated low-powered tests and perform a meta-analysis, or even, in the interest of beating the already dead horse and writing run-on sentences, do a rollout with a pre/post analysis (which is a form of pseudo-experiment where one could perform a paired-t analysis). Each of these approaches is statistical, just with varying levels of bias.
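To make the first two of those approaches concrete, here’s a minimal sketch in plain Python (standard library only). The traffic numbers are made up for illustration; the Bayesian part uses a uniform Beta(1, 1) prior on each conversion rate, and the bootstrap part resamples each group’s raw outcomes with replacement.

```python
import random

random.seed(42)  # for a repeatable illustration

# Hypothetical low-traffic results (made-up numbers)
a_conv, a_n = 12, 200   # control: 12 conversions out of 200 visitors
b_conv, b_n = 19, 200   # variant: 19 conversions out of 200 visitors

# --- Bayesian approach (Beta-Binomial) ---
# With a Beta(1, 1) prior, each posterior is Beta(conversions + 1,
# non-conversions + 1). Estimate P(variant > control) by Monte Carlo.
draws = 20_000
wins = sum(
    random.betavariate(b_conv + 1, b_n - b_conv + 1)
    > random.betavariate(a_conv + 1, a_n - a_conv + 1)
    for _ in range(draws)
)
p_b_better = wins / draws

# --- Bootstrap with replacement ---
# Resample each group's raw 0/1 outcomes and look at the spread of the
# difference in conversion rates across resamples.
a_outcomes = [1] * a_conv + [0] * (a_n - a_conv)
b_outcomes = [1] * b_conv + [0] * (b_n - b_conv)

diffs = []
for _ in range(5_000):
    a_rate = sum(random.choices(a_outcomes, k=a_n)) / a_n
    b_rate = sum(random.choices(b_outcomes, k=b_n)) / b_n
    diffs.append(b_rate - a_rate)
diffs.sort()
ci_low = diffs[int(0.025 * len(diffs))]
ci_high = diffs[int(0.975 * len(diffs))]

print(f"P(variant beats control) ~ {p_b_better:.2f}")
print(f"95% bootstrap interval for lift: [{ci_low:+.3f}, {ci_high:+.3f}]")
```

If the posterior probability and the bootstrap interval point the same way, you can act with a bit more confidence; if they disagree, that disagreement is itself useful information about how thin your data really is.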
So to make a long story short, there are always statistical approaches one can take, and one should explore them before potentially throwing away money. But at the end of the day, it comes down to choosing what kind of Experiment you want to run.
You’ll have to do a cost-benefit analysis (and understand your organization’s core competencies) and choose between a more trustworthy “science-y” approach and the “see what happens” route.
Which kind of Experiment would you run?
See you in 2 weeks,
Founder, Experiment Nation