A Conversation with CXL’s Gertrud Vahtra about Experimentation
I recently chatted with CXL’s Gertrud Vahtra about some of the fundamentals of experimentation and A/B testing, including what kinds of research you should use to develop hypotheses, the framework she uses to prioritize them, and some of the pitfalls to watch out for during execution.
Rommil: Hi Gertrud, how are you?
Gertrud: I’m doing great, thank you!
How about we start with a little bit about yourself and what you do?
I work at CXL Agency as an optimization strategist, managing and working on the testing programs for a number of SaaS and eCommerce clients.
When I try to explain to my family or friends what exactly I do, it looks something like this.
Ha! Someone used this meme to describe me a while back as well. That’s hilarious lol
In reality, I conduct user research to come up with data-informed decisions and hypotheses to A/B test on a website and prove their validity. The goal of the work is to either increase revenue, improve user flows or increase the conversion rate (whether it’s orders, sign-ups or subscriptions). These goals depend on the client as well, but simply put, my main focus is to improve the user experience on websites.
Can you share with us how you got into experimentation?
To be honest, three years ago I didn’t even know what CRO or A/B testing was. My Bachelor’s was in Media and Communications, and while I did have occasional courses on marketing, copywriting or media technologies, my knowledge of this subject came from CXL courses way later. About two and a half years ago, when I had just finished my second year of university, I started looking for internships to gain experience in my field while also being able to afford living expenses in London. It was a tough journey, where I even worked for free at another company for around three months, while also working a bar job to cover living expenses and balancing my studies at the same time. My main goal was to graduate with some experience in the field, and I am so glad that I decided to take this route.
CXL offered me a paid internship and that’s where I got more involved with this world of experimentation. Since then, I have been building my career within the company and now get to manage some big clients and see my own ideas end up on their sites. Once you see your first experiment win, bringing more revenue for the company or increasing the average order value, you get hooked quite quickly.
In your opinion, what role does research have in experimentation?
Experimentation IS research. It’s the bridge between data and action or decision. Therefore it’s critical to build that bridge on a solid foundation: a solid hypothesis to test.
Most, if not all, of your hypotheses should be supported by some type of research or data. The more a hypothesis is supported by data, the better the chances that the test will have a bigger effect on the site, or even win.
Totally. Make decisions based on evidence.
It’s also important to combine qualitative and quantitative data for thorough research. Quantitative data can often tell us where people are dropping off and where the friction points are, while qualitative data answers the “why” questions.
We have used the ResearchXL framework, where we mainly look at 6 different areas for data gathering and analysis, which is later turned into action items and a testing roadmap. These are the 6 areas:
- Web Analytics
- Mouse Tracking Analysis
- Heuristic Analysis
- Technical Analysis
- User Testing
- Qualitative Surveys
For example, if you want to find out more about user perceptions or their motivations, you should go for surveys or polls. However, if your goal is to find usability or clarity issues, the friction areas for your customers, you should have a look at user testing, session recordings, chat logs or passive feedback.
Research helps you find what your customers actually want or dislike. Each site is different and so is their target audience.
Once you have gathered all of the research, you need to categorize and prioritize the action items. Some of these items should be experimented on to prove your hypotheses, while easy fixes for relevant issues should be implemented straight away.
I totally agree. However, I often hear people say that because they have so much information from other sources like surveys, analytics, etc., they don’t need to run experiments. How would you respond?
Experimentation lets you validate the changes you want to make on the page, but it also allows you to keep learning. As you run an experiment, you collect measurements or samples on the relevant metrics you want to affect and later observe whether the change is truly having a positive or negative effect on the website’s conversions. You should also use statistics to measure how confident you are that those changes are actually reliable and calculate the statistical significance of your lift. The results from your experiments will give you confidence that you know what works for your site.
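To make that concrete, here is a minimal sketch of how the significance of a lift could be checked with a two-proportion z-test. The conversion counts and the helper function are made up for illustration and aren’t from CXL’s own tooling.

```python
# Minimal sketch: two-proportion z-test for an A/B test lift.
# Conversion counts below are invented for illustration only.
from math import sqrt
from statistics import NormalDist

def significance_of_lift(conv_a, n_a, conv_b, n_b):
    """Return (relative lift, two-sided p-value) for variant B vs. control A."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)             # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))         # two-sided p-value
    lift = (rate_b - rate_a) / rate_a
    return lift, p_value

lift, p = significance_of_lift(conv_a=480, n_a=10_000, conv_b=540, n_b=10_000)
print(f"lift: {lift:.1%}, p-value: {p:.3f}")             # "significant" if p < 0.05
```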
Digging into segments will also get you more insights on how your changes performed on different types of audiences.
100%. Not digging in here is such a missed opportunity.
Experimentation in general allows you to prove your hypotheses and gives you more insight into how effective those changes are and whether you should iterate on them. Sometimes it also tells you that your idea actually makes no difference on the site or even has a negative effect. In that case, you can move on and test different concepts.
A/B testing also helps you reduce risk by avoiding costly, ineffective implementations. It helps you get clarity on whether some of the planned website launches or redesigns could have the opposite effect on conversions and create more confusion, rather than solve it. It helps you rule out the changes that you shouldn’t move forward with.
Having a strong experimentation culture in your company also keeps you innovating: constantly optimizing your site for improved content, user engagement and higher conversion rates.
I’m sold! Sign me up! lol
Changing gears to something more tactical. How do you go about formulating a hypothesis and what defines a good hypothesis?
Before working on hypotheses, I have a look at the testing roadmap, which is based on qualitative and quantitative research. Where are the gaps? Where do you have enough traffic and space for more tests? Once you find those 2–3 pages you should be working on, you go look at the previous research, analytics, heatmaps and maybe even session recordings.
My first question is whether there are hesitations or problems that the customers are facing. How can I solve those problems for them? Speaking to those friction points and hesitations, or solving your customers’ most common problems, is a goldmine for winning experiments.
If there are no specific issues, then I look at what motivates the target audience. Can I highlight those features more? Can I make it more clear that they are getting the best value from this product or service?
When I have an idea of how to improve a specific page or flow for users, I have a look at the goals and tie that hypothesis in with the main KPIs.
So, let’s say you have a bunch of hypotheses. How do you figure out where to start?
We use our own prioritization framework at CXL (called PXL), where you score the changes in your experiment on a binary scale to see whether they would have a big enough effect. For example, you can look at whether the changes are above the fold, whether the test runs on high-traffic pages, whether it would be noticeable within five seconds, or whether it addresses specific issues discovered from qualitative feedback or analytics. Scoring your experiments in each of those areas will help you prioritize the tests better, so you know which ones might have a bigger effect. Those that would not have a high effect, you test when your roadmap is otherwise empty.
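As a rough illustration, here is a minimal sketch of what that kind of binary scoring could look like in code. The criteria names and the equal weighting are simplified assumptions based on the examples Gertrud mentions, not the official PXL spreadsheet.

```python
# Minimal sketch of binary prioritization scoring in the spirit of PXL.
# Criteria and equal weights are simplified assumptions for illustration.
CRITERIA = [
    "above_the_fold",
    "noticeable_within_5_seconds",
    "runs_on_high_traffic_pages",
    "addresses_issue_from_qualitative_feedback",
    "addresses_issue_from_analytics",
]

def pxl_style_score(test_idea: dict) -> int:
    """Sum of binary answers (1 = yes, 0 = no) across the criteria."""
    return sum(1 for c in CRITERIA if test_idea.get(c, False))

ideas = [
    {"name": "Rewrite hero headline", "above_the_fold": True,
     "noticeable_within_5_seconds": True, "runs_on_high_traffic_pages": True},
    {"name": "Tweak footer links", "runs_on_high_traffic_pages": True},
]

# Highest-scoring ideas go to the top of the testing roadmap.
for idea in sorted(ideas, key=pxl_style_score, reverse=True):
    print(f"{idea['name']}: {pxl_style_score(idea)}")
```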
Moving to execution: as someone who has coordinated many experiments, what kinds of risks could one expect?
Some of the common issues come from bugs and technical problems, especially if there’s not enough transparency between teams on new website launches and the pages that are being tested on. If teams are not communicating those changes well, there are higher chances of tests breaking in the middle. We have multiple layers of QA, before and after test launch, on both the agency and client side, to make sure that those experiments won’t break mid-run. Sometimes it’s worth doing a double-check mid-test as well.
Another issue could be ending tests too early or seeing an SRM (sample ratio mismatch) issue with the numbers after you have already stopped the experiment. It’s a waste of time if you have to run the test again.
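For readers unfamiliar with SRM: it means the traffic split you observed deviates from the split you configured by more than chance would explain. Below is a minimal sketch of one common check, a chi-square goodness-of-fit test against an intended 50/50 split; the counts and the 0.001 alert threshold are illustrative assumptions, not a prescription from CXL.

```python
# Minimal sketch: checking for sample ratio mismatch (SRM) against a 50/50 split.
# Counts are invented; with one degree of freedom, the chi-square p-value equals
# the two-sided normal p-value of sqrt(statistic).
from math import sqrt
from statistics import NormalDist

observed = [50_310, 48_190]                   # users bucketed into control / variant
total = sum(observed)
expected = [total * 0.5, total * 0.5]         # what a healthy 50/50 split would give
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p_value = 2 * (1 - NormalDist().cdf(sqrt(chi_sq)))   # df = 1
if p_value < 0.001:                           # a commonly used SRM alert threshold
    print(f"Possible SRM: p = {p_value:.2e}")
else:
    print(f"Split looks healthy: p = {p_value:.3f}")
```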
Finally, it’s time for the lightning round!
Bayesian or frequentist?
I can work with both, but I use frequentist more often with our clients.
What is your biggest pet peeve in experimentation?
When your test has finished the design stage, the developers have already built it and you have done all the QA, and then it gets blocked. I feel like it’s a waste of people’s time and energy.
What is the most interesting thing you have researched for an experiment?
Whether children can learn to code from a young age and how it can affect their development later in life.
Wow that’s pretty cool.
Last but not least. Describe Gertrud in 5 words or less.
Reliable, curious, ambitious.
Thank you, Gertrud, for joining the Conversation!