Meta-analysis: Do I have to run my test yet again?

Recently, there’s been a bit of buzz within the experimentation community around the concept of meta-analysis. For those unfamiliar, in a nutshell, it’s about pooling the results of similar experiments to draw broader insights. Meta-analysis is widely considered one of the most trustworthy types of analysis out there. That said, it’s still prone to pitfalls: Simpson’s Paradox (where a segment of an audience produces different results than the whole), selection bias in which experiments you pool together (e.g., convenience sampling, where you only include studies that are easy to find), questionable statistical practices (e.g., did the researcher p-hack?), and pooling studies that don’t actually answer the same hypothesis.
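To make "pooling results" concrete, here's a minimal sketch of the most common approach, a fixed-effect meta-analysis using inverse-variance weighting. The numbers are hypothetical lifts, not from any real experiment:

```python
import math

def pooled_effect(effects, std_errors):
    """Fixed-effect meta-analysis via inverse-variance weighting.

    Each study's effect is weighted by 1 / SE^2, so more precise
    studies pull the pooled estimate toward themselves.
    """
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical relative lifts from the same test run in two countries
effects = [0.04, 0.06]       # estimated lift in conversion rate
std_errors = [0.01, 0.02]    # standard error of each estimate

estimate, se = pooled_effect(effects, std_errors)
print(f"pooled lift: {estimate:.4f} +/- {1.96 * se:.4f}")
```

Note that the fixed-effect model assumes all studies estimate the *same* underlying effect; if you suspect Italy and Germany genuinely differ, a random-effects model is the more defensible choice.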

I won’t go into the nitty-gritty of performing a meta-analysis, but at the same time, I did want to address the question: If I ran a test in Italy, and again in Germany, do I have to run it yet again in Canada? There are generally two schools of thought: 

  1. No. If you have learnings from 2 countries, don’t waste your time testing in another. You should have confidence that you’ll see the same result.
  2. Yes. Italy, Germany, and Canada are not the same country – they can react differently.

We could go back and forth on which approach is right. Personally, I prefer to approach such problems from a practical perspective. There is a cost to running (or not running) a test. Either you spend the money to set up, launch, and analyze the test (and potentially incur an opportunity cost), or you skip the test and see what happens. Admittedly, there is a third option – launching the change and not measuring its impact at all – but that is fairly irresponsible. This highlights the importance of reducing the cost of learning: if you can test very cheaply, you have no reason not to run the test.

Another thing to consider is whether you have a way to measure how similar the different audiences (or countries, in this case) actually are. We do this at Loblaw Digital: when we run experiments that involve our physical stores, we look at a number of metrics to confirm that the stores are indeed comparable. Loblaw Digital benefits from having this data on its stores and the areas they serve, but that isn’t always the case – you often won’t have access to these metrics for other audiences.


In the end, it comes down to how much exposure to risk you can tolerate.

Good luck and see you in 2 weeks!

Rommil Santiago
Founder, Experiment Nation


