Exploring the world’s cultures of Experimentation with Collin Tate Crowell


A conversation with GO Group Digital’s Collin Tate Crowell about Experimentation

In the spirit of Experimentation, we’re going to try something new. We’re starting a regular series with my guest today, Collin, where we explore cultural and market differences around the world with respect to Experimentation. We feel this is a really interesting topic that doesn’t get a lot of attention. So, it should be fun and we’d love to hear your thoughts. Do you want more of this content? Less? Let us know!


Rommil: So Collin, how are you? I hear you’ve just come off vacation!

Collin: Fresh back from the Hawaii of Canada: Savary Island. I used a paddleboard to go crabbing every day. I’m such a geek I created a spreadsheet to track which traps were producing (or not). Crab Catcher 3000 (my kids named the traps) for the win. #testandlearn Rommil!

For the benefit of our readers, could you share with us what you do and a bit of your career journey?

I help companies build startups and MVPs. From Vancouver, Canada, I’m helping build out GO Group Digital, a joint venture between two of the largest experimentation consultancies and agencies in the world: Widerfunnel and Germany’s konversionsKRAFT. Before GO, I was based in Asia helping build startups and media brands for a European media conglomerate. They plucked me up after I started a jazz bar, media consultancy, and brand in Qingdao, China. The hallmark of my career is harnessing network effects with lean n’ mean teams.

Could you tell us a bit about the GO Group?

GO Group Digital does two things: 1) as a company, we build and optimize international test and learn programs for enterprise companies and 2) as a network of leading experimentation agencies, we share business and strategic know-how.

Since you’ve built many test and learn programs around the world, I’d love to understand how you define an experimentation culture and how you know when a company has it, and, even more interestingly, when it doesn’t.

Sure, there are important cultural differences when it comes to delivering an irresistible customer experience, but a healthy and effective culture of experimentation is universal (as far as I can tell) because it’s human-centric.

It recognizes that individuals thrive when we:

  • Believe our testing work has purpose;
  • Are trusted to deliver on that purpose; and
  • Are held accountable (good and bad) for our actions.

A dead-end program is one that can’t get proven-to-be-successful customer behaviours and insights implemented. That shows that either there is a misalignment between the program and the company’s goals, or the company is poorly run, or both.

What are some of the differences you’ve seen between cultures when it comes to building an experimentation culture?

I see different pictures.

Canadian and American companies rely WAY too much on what we call “tool fuel”, e.g. number of clicks, views, etc.

Germans are perfectionists.

The British aren’t bold (enough).

Australians need more practice but get straight to the pain points quickly.

You’ll face HiPPOs (the highest-paid person’s opinion) in Japan.

And so on.

Every test and learn culture can’t help but reflect its local market, which is why we think it’s so important to lean on an organization like ours, which creates global+local (“glocal”) teams.

That’s really fascinating. How do you measure the quality of a test and learn program? How do you know it’s going well?

We evaluate a test and learn program by asking:

  1. How well can it use data to ideate customer-centric solutions?
  2. Is it motivated to help achieve a clear business goal?
  3. How effective is it in turning insights into action?

As consultants, we spend most of our time at the executive level highlighting opportunity because, often, there is a disconnect or misunderstanding between the power and purpose of digital experimentation and the company’s primary business goal. But as test and learn agents, we deliver results, i.e. actual output.

Speaking of output: should companies focus on test volume? Why or why not?

Yes, but with a caveat. The goal is to learn and actuate, not to pump out tests. Organizations learn this crucial point as their test and learn programs mature, which means a company has to allocate resources and gain experience.

At first, your test and learn program is just not going to be that impactful. But with practice and by delivering customer-centric solutions that affect the business, you’ll a) get better at #testandlearn and b) get buy-in, i.e. budget, from executives.

So, if you’ve been testing and learning for years, I’m going to expect a certain test volume, velocity, AND sophistication. If, after a few years, the company is pumping out one test a month and the head of its program is stuck in the basement, then I…

But it’s not just maturity that you need to consider. It’s size, i.e. resources. I like to use Ivy-League admissions criteria to help executives think about how many customer insights (tests) they should be learning every month. If you’re the daughter of a president, Harvard is going to expect a lot more from you than from a typical applicant.

Volume, velocity and test sophistication should be proportionate to your organization’s testing maturity and overall size.

I believe that if you think your program is making a business impact by pumping out one insight a month, then either you or your company is not thinking big enough.

In your opinion, how long does it take for a company to transform into one that embraces testing and learning? What are some of the biggest hurdles?

Let’s be clear: very few companies embrace testing and learning as a whole. Most often, a line of business or a division just gets it. These stars have four things in common:

  1. People are trusted and held accountable to do their jobs
  2. Executives believe in it and participate
  3. Testing serves a clear business purpose
  4. Hypotheses are derived directly from customer feedback

Getting all four right takes time and brilliant leadership, which the market is loath to give. Since it takes so long, look for leading indicators: Do you call it a winner or an insight? Do you have a library of test results that non-experimentation experts can understand? Are budgets allocated based on test velocity and the number of test implementations? Can you get a straight answer when you ask whether every “winning” test gets implemented?

It sounds cheesy, but the biggest hurdle is a lack of trust. Innovation and business growth come from change. Change is always risky. If an organization does not trust their employees, then they will not take risks. Without taking any risk, there can be no change. Innovation and business growth (how test and learn programs should be measured) will struggle in this setting.

“If an organization does not trust their employees, then they will not take risks.”

Before today’s chat, you mentioned that you specialize in applied behavioural science in terms of Experimentation. Could you tell us more about that?

There are much smarter people than me in the Group who can explain this, but I’ll give it a shot.

Psychology, economics, neuroscience, evolutionary biology, etc. have provided us with powerful principles about how the human mind works, along with a long list of tactics to stimulate behavioural change (UX best practices, persuasion principles, etc.).

But best practices are not universal. The effectiveness and impact of these principles vary widely across contexts and customer types.

Experimentation gives us a method to both understand and validate the application of these principles in the real world: to understand the pressures influencing customer behaviour, and to validate the use of design principles to change how those pressures are perceived. We apply these principles at both a broad and a granular level across various customer/user types (segments) and contexts.

Simply put, applied behavioural science allows us to better understand why consumers behave one way or another and experimentation allows us to validate our hypotheses.

I’m only scratching the surface. You’d love talking to the heads of our BeSci teams.

“…be the laboratory, not the test.”

I look forward to it! Changing gears a bit: Where do you see Experimentation in 5 years?

I often repeat, “be the laboratory, not the test.” I believe that testing will continue to be commoditized, meaning it’s going to be far more valuable to be able to ideate and innovate. Why go up against tech and optimize at the margins, when we humans have the skillset (and data) to explore?

I hear you. Finally, it’s time for the Lightning round!

Bayesian or Frequentist?

I demur with a quote: “If your experiment needs a statistician, you need a better experiment.” (Ernest Rutherford)

Ha! Interesting take. I’m not sure I wholeheartedly agree with that, but we can pick this up another time.

If you couldn’t work in Experimentation, what would you do?

I’d build a business that sells outrageously hip cocktail cherries like Paw Paw’s Cocktail Cherries and incubate startups.

Biggest pet peeve about Experimentation.

There’s too much focus on exploitation when I believe the BIG opportunities are in the exploration part of our work. Our methodologies and ideation processes are perfect for innovative product and service development, but we often get relegated to milking cash cows. Be bold!

Also, no one in the industry seems happy with what to call our line of work. CRO is too puny and niche. Digital transformation is too abstract. Growth… too entangled with marketing. I lean toward experimentation, but I worry it’s too vague. Hypothesizing is the sexiest part of this industry. What experimentation strategist wouldn’t be seduced by the title of chief innovation officer? Ask me again tomorrow.

Describe Collin in 5 words or less.

Left and right brain executor

Collin, thank you for chatting with me today and for joining the conversation! Looking forward to future chats.

Thank you!


