How Henrik Kahra estimates the annual impact of Experiments

A Conversation with Henrik Kahra about Experimentation

I recently spoke to Henrik about how he estimates the annual impact of an experiment, what an ideal Experimentation team looks like, and how he convinces newcomers of the value of Experimentation.


Rommil: Hi Henrik, thanks for taking the time to chat today! How are you?

Henrik: Doing great, Rommil, I hope you’re good as well.

I’m good, thanks. So, Henrik, you’ve been testing for a long time. How about we start things off with you sharing a bit about what you do and your career journey?

Sure! My background is in marketing and I’ve loved the web since an early age. I also appreciate travel, having lived in 5 countries, so I naturally wanted to mix travel, the web, and marketing. My career began in travel technology, and about a decade ago I stumbled on the experimentation that the folks at Conversion Rate Experts were doing. I remember reading one of their case studies and that really got me hooked on all things testing. That mix of data, marketing, customer behaviour, and knowing what worked and what didn’t was exciting. Currently, I’m leading the web experimentation efforts at Norwegian Cruise Line.

https://www.ncl.com/ca/en/

Very cool. Could you share with us what your approach to optimization is? Like, how do you decide where to start?

The starting point is asking questions. What has been done so far? What data exists today and what can we infer from it? After that, perform a good heuristic analysis and a walk-through from the vantage point of the customer. These efforts then give a clearer picture of how to start building the testing roadmap, what data we need (that we don’t have), and which areas of the site are most urgently in need of work.

How do you convince stakeholders who are hesitant to run Experiments to trust in that process?

I have been fortunate in that the organizations I’ve been a part of have always understood the power of experimentation. Often the best approach is to just run some tests. If nothing is in place, show what others have done. The best is to show examples not just from the big tech firms but also from organizations in the same industry. It’s also good to showcase what some old dinosaurs who have since adopted a data-informed culture have done. This can open eyes.

The trust then builds over time. Wins can help, but so can results that are completely opposite to what stakeholders expected. That’s usually when the idea that we aren’t as good at guessing what our customers want begins to take hold.

Another crucial way to build trust is to be honest about what experiments can and cannot answer. And sometimes experiments go wrong. Maybe the data is BS because of a configuration issue or a collection problem. Owning up to that is important because it positions the optimization team as a data-driven function that takes data seriously. Owning up to hiccups builds trust because it shows there are real people behind testing. Nobody likes a bunch of optimizers coming in and telling everyone at a company how to do things. This ownership breaks down some of those barriers and builds trust.

“Nobody likes a bunch of optimizers coming in and telling everyone at a company how to do things.”

After hearing about a successful test, senior leaders often ask, “How will this impact our annual revenue?” How do you respond to that? And do you have advice for others facing tough questions like this?

There are several approaches to tackling the revenue question.

First, if revenue is only tracked on the frontend, be upfront about it and explain that the revenue is directional. It’s not money in the bank, so assume that some of it may be duplicated revenue, that orders will be cancelled, and so on. Taking X% out of what the frontend revenue is showing is prudent.

Second, tie the experimentation tool/platform to backend revenue if possible. That gives a much clearer view into what revenue impact an experiment is having.

Now, the annual part. Forecasting what revenue a test will bring over the next 12 months can be challenging. If you had a test running in December 2019 that showed an 8% revenue-per-visitor increase, I’m not sure that 8% would have stuck given everything that happened in the first half of 2020. COVID-19 simply turned the world upside down. So the annual revenue part needs to have some logic built into it. A simple model is fine as long as the assumptions are clear, and in my view it’s important to be conservative here. Adding some decay is also important: I would assume a 20–30% decay over the 12 months and not forecast beyond 12 months, as it’s too far out.

As for other tough questions from senior leadership: be open and honest about things. If there’s no data on something, say so. If it’s hard to get that data, say so. Explain where you stand. If the experiment didn’t collect data on a particular metric, be clear about it.

“A simple model is fine as long as the assumptions are clear.”

I’m with you. I’m a big believer in simple models with clear assumptions, myself. My opinion is that trying to estimate the annual impact of a single experiment is essentially trying to model something from one data point; it will be wrong. So I don’t bother doing anything overly complex.
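(Editor’s note: to make the arithmetic concrete, here is a minimal sketch in Python of the kind of conservative model Henrik describes: haircut the frontend-measured lift, decay it over 12 months, and stop there. All figures, names, and default values are hypothetical illustrations, not Henrik’s actual model.)

```python
def annualized_impact(
    monthly_baseline_revenue: float,  # backend (or haircut frontend) revenue per month
    observed_lift: float,             # e.g. 0.08 for an 8% revenue-per-visitor increase
    frontend_haircut: float = 0.15,   # assumption: remove ~15% for cancellations/duplicates
    annual_decay: float = 0.25,       # assumption: 20-30% decay of the effect over 12 months
) -> float:
    """Conservative 12-month estimate of an experiment's revenue impact.

    A sketch, not a standard formula: discount the measured lift for
    frontend over-counting, then decay it linearly so 20-30% of the effect
    is gone by month 12. Per the interview, don't forecast beyond 12 months.
    """
    adjusted_lift = observed_lift * (1 - frontend_haircut)
    total = 0.0
    for month in range(12):
        # Linear decay: full effect in month 1, (1 - annual_decay) by month 12.
        decay_factor = 1 - annual_decay * (month / 11)
        total += monthly_baseline_revenue * adjusted_lift * decay_factor
    return total


# Hypothetical example: $10M monthly baseline, 8% observed lift.
# With the defaults above this prints $7,140,000 -- well under the naive
# 0.08 * 12 * $10M = $9.6M you'd get without the haircut and decay.
print(f"${annualized_impact(10_000_000, 0.08):,.0f}")
```

The point isn’t the particular numbers; it’s that every assumption (haircut, decay, horizon) is visible and adjustable, which is exactly what makes a simple model defensible in front of senior leadership.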

Changing gears. What’s your opinion on how to staff Experimentation? What roles do you need to fill to succeed?

Experimentation teams differ widely depending on the company. I think it’s important to have at least the following: an optimization team leader (this is the project manager/testing champion/data advocate), a frontend developer, a dedicated QA resource, an analytics/customer research specialist, and a UX designer. Beyond that, there should be a focus on conversion copywriting and customer psychology. That’s a pretty good team!

I’d love to have a team like that! Heck, half of that team would be nice haha!

How do you describe the perfect Experimentation culture?

The perfect experimentation culture is one where testing is as evident and omnipresent as the business itself. One where testing is not merely a function tucked away somewhere but is embedded in ALL relevant decision-making.

I’ve always felt that you can tell how much an organization values Experimentation by the number of people hired to support it.

Do you have any advice for those looking to get into this field?

Getting into this field is easier now than ever! There’s so much information now: blogs, books, events, lots of people on LinkedIn willing to share knowledge. A lot has happened in the last 6–8 years. So getting into this field just means being curious about customer experiences, asking questions, and seeking knowledge around what it takes to do conversion optimization. It’s a process with a lot of arms and legs, but the information is all out there. Elon Musk read books on rocket propulsion; it’s about acquiring that baseline knowledge and then just getting your feet wet.

Love it.

Could you describe your favourite experiment from recent history?

Going some time back now, there was one where we showed form fields progressively instead of all at once, which worked. But when we did the same thing in a different setting, the opposite was true. I found that really interesting because it just goes to show that we can’t assume anything.

I like that one. Never assume, right? Finally, it’s time for the Lightning round!

Frequentist or Bayesian?

Frequentist.

If you couldn’t be in Experimentation what would you do?

Filmmaker, race car driver or a chef.

Wow, I can hardly parallel park. lol

Describe Henrik in 5 words or less.

Passionate, curious, data-driven, meticulous, organized.

Amazing! Henrik, thank you for joining the conversation!

Thanks for having me, Rommil!

