Recently, there’s been a bit of buzz within the experimentation community around the concept of meta-analysis. For those unfamiliar, in a nutshell, it’s about analyzing similar experiments together to pull out insights. In fact, meta-analysis is often considered the most trustworthy type of evidence out there. That said, it’s still prone to things like Simpson’s Paradox (where a segment of an audience produces different results than the audience as a whole), selection bias in which experiments you pool (e.g. convenience sampling, where you only include studies that are easy to find), questionable statistical practices in the underlying studies (e.g. did the researcher p-hack?), and pooling studies that don’t actually answer the same hypothesis.
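To make the pooling idea concrete (without getting into the mechanics just yet), here’s a minimal sketch of fixed-effect, inverse-variance pooling of lift estimates from two hypothetical experiments. The countries, lifts, and standard errors are made-up placeholders, and this is just one common way to combine estimates – not a recommendation of a specific method.

```python
# A minimal sketch of fixed-effect meta-analysis via inverse-variance weighting.
# The experiments and numbers below are hypothetical placeholders.
from math import sqrt

# (country, estimated lift, standard error) from two made-up A/B tests
experiments = [
    ("Italy",   0.030, 0.012),
    ("Germany", 0.022, 0.010),
]

# Weight each estimate by the inverse of its variance, then pool.
weights = [1 / se**2 for _, _, se in experiments]
pooled_lift = sum(w * lift for w, (_, lift, _) in zip(weights, experiments)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"Pooled lift: {pooled_lift:.4f} +/- {1.96 * pooled_se:.4f} (95% CI half-width)")
```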
I won’t go into the nitty-gritty of performing a meta-analysis, but I did want to address a related question: if I ran a test in Italy, and again in Germany, do I have to run it yet again in Canada? There are generally two schools of thought:
- No. If you have learnings from 2 countries, don’t waste your time testing in another. You should have confidence that you’ll see the same result.
- Yes. Italy, Germany, and Canada are not the same country – they can react differently.
We could go back and forth on which approach is right. Personally, I prefer to approach such problems from a practical perspective. There is a cost to running (or not running) a test. Either you spend the money to set up, launch, and analyze the test (and potentially incur an opportunity cost), or you skip the test and see what happens. Admittedly, there is a third option – launch the change and never look at the impact at all – but that is fairly irresponsible. This highlights the importance of reducing the cost of learning: if you can test very cheaply, you have no reason not to run the test.
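As a rough illustration of that trade-off, here’s a toy expected-cost comparison between running the test and shipping the change untested. Every number in it is a hypothetical placeholder; the point is only that as the cost of learning drops, testing quickly becomes the cheaper option.

```python
# A toy expected-cost comparison for "run the test" vs. "ship without testing".
# All numbers are hypothetical placeholders; plug in your own estimates.

cost_to_run_test = 5_000        # setup, launch, and analysis effort
opportunity_cost = 2_000        # value delayed while the test runs
p_change_is_harmful = 0.2       # prior belief that the change hurts in the new market
cost_if_harmful_ships = 50_000  # value lost if a harmful change goes live unchecked

expected_cost_of_testing = cost_to_run_test + opportunity_cost
expected_cost_of_skipping = p_change_is_harmful * cost_if_harmful_ships

print(f"Expected cost of testing:  {expected_cost_of_testing:,}")
print(f"Expected cost of skipping: {expected_cost_of_skipping:,.0f}")
# If testing gets cheap enough, it dominates -- the "reduce the cost of learning" point.
```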
Another thing to consider is whether you have a way to measure how similar the different audiences (or countries, in this case) actually are. We do this where I work: when we run experiments that involve our physical stores, we look at a number of metrics to confirm that the stores are indeed comparable. My company benefits from having that data on its stores and the areas they serve, but that isn’t always the case – you often won’t have access to these metrics for different audiences.
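If you do have metrics on your audiences, a lightweight similarity check can be as simple as comparing standardized differences across a few key metrics. The sketch below uses made-up metric names and values for two countries; the ~0.1 threshold is a common rule of thumb, not a hard rule.

```python
# A minimal sketch of checking whether two audiences look similar on key metrics,
# using the standardized difference of means. Metric names and values are made up.
from statistics import mean, stdev

metrics_country_a = {
    "avg_order_value": [52, 48, 55, 61, 47],
    "visits_per_week": [3.1, 2.8, 3.4, 2.9, 3.0],
}
metrics_country_b = {
    "avg_order_value": [50, 53, 49, 58, 51],
    "visits_per_week": [2.7, 3.2, 3.0, 2.6, 3.1],
}

for metric in metrics_country_a:
    a, b = metrics_country_a[metric], metrics_country_b[metric]
    pooled_sd = ((stdev(a) ** 2 + stdev(b) ** 2) / 2) ** 0.5
    std_diff = abs(mean(a) - mean(b)) / pooled_sd
    # Rule of thumb: standardized differences below ~0.1 suggest the groups are well balanced.
    print(f"{metric}: standardized difference = {std_diff:.2f}")
```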
In the end, it comes down to how much exposure to risk you can tolerate.
Good luck and see you in 2 weeks!
Rommil Santiago
Founder, Experiment Nation