Journey Further’s Experimenting in an Agile Environment with Magnolia Carvajal - Experiment Nation


Magnolia Carvajal

A chat with Journey Further’s Magnolia Carvajal about Experimentation

I recently chatted with Magnolia to get her take on whether we should avoid overlapping tests and how to kickstart a culture of Experimentation.


Rommil: Hi Magnolia, thank you for taking time away from your busy schedule to chat with me today. How are you?

Magnolia: Hello! No worries at all, thanks for reaching out. I am very well, happy to be back in the office after Covid and finally enjoying the warmer weather 🙂

Can’t complain about more sunshine, that’s for sure. So Magnolia, I’d love to hear how you got into Experimentation.

It was really random, actually. I studied Marketing with Psychology at university, and at my first job at Tesco I was lucky enough to be invited to a discussion around reducing average handling time on the websites. From that meeting, I asked if they would put me on a digital marketing course through Tesco Academy, and they did! I learnt a lot, and then I was put to work on Tesco Clubcard website testing to reduce the calls coming into the call centre. It was all CRO from there on.

What are your most and least favourite things about Experimentation?

Most favourite thing: When you have done all the research and got all the data behind your test, you run the test, and it LOSES. Why??


Least favourite thing: As above.


Changing gears to something more tactical — What’s your view on deciding what to test?

Analyse, test, analyse, repeat. Also, insights taken from previous tests are a real driver of what to test next.

A common question I hear is, “How many changes should be made to comprise a variant?” What’s your take?

It very much depends. I generally will only make one change, unless it is a split URL or landing page test. Otherwise, how do you know what it was that worked or didn’t work?

How do you go about selecting a success metric, or a set of metrics?

Depends on the hypothesis and what we want to get from the test.

Companies that catch the testing bug always want to run more tests. What’s your opinion on concurrent testing? How do you account for potential interactions?

My opinion is that there is no way to avoid it in an agile environment, but this does not mean running two tests on the same page, as that is a direct crossover. However, running tests on the homepage and further down the customer journey, such as the checkout, is necessary in an agile environment. We account for potential interactions by segmenting properly when doing post-test research and examining the interactions between any two sets of A/B tests.
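The segmentation Magnolia describes can be sketched roughly as follows. This is a hypothetical illustration, not her actual process: users exposed to two concurrent tests (say, homepage and checkout) are split into the four variant combinations, and the lift of one test is measured separately within each arm of the other. All counts below are made-up example data.

```python
# Hypothetical sketch: checking for an interaction between two concurrent
# A/B tests by segmenting users into the four variant combinations.

def conversion_rate(conversions, visitors):
    """Fraction of visitors who converted."""
    return conversions / visitors

# Segment: (homepage variant, checkout variant) -> (conversions, visitors)
# Counts are illustrative only.
segments = {
    ("A", "A"): (120, 1000),
    ("A", "B"): (150, 1000),
    ("B", "A"): (130, 1000),
    ("B", "B"): (210, 1000),
}

rates = {combo: conversion_rate(c, n) for combo, (c, n) in segments.items()}

# Lift of checkout variant B, measured separately within each homepage arm.
lift_given_home_a = rates[("A", "B")] - rates[("A", "A")]
lift_given_home_b = rates[("B", "B")] - rates[("B", "A")]

# If the two tests were truly independent, these lifts should be roughly
# equal; a large gap (the "interaction") suggests the tests influenced
# each other and their results should not be read in isolation.
interaction = lift_given_home_b - lift_given_home_a

print(f"checkout lift | homepage A: {lift_given_home_a:.3f}")
print(f"checkout lift | homepage B: {lift_given_home_b:.3f}")
print(f"interaction estimate: {interaction:.3f}")
```

In practice you would also want a significance test on the interaction term (e.g. a chi-square test or a logistic regression with an interaction coefficient) before concluding the tests crossed over, since small gaps will appear by chance.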


How do you know if a company has a culture of Experimentation?

You would know a company has a culture of experimentation if it had a data-driven web development process, rather than just changing things because the top person said so.

What steps can you take to help a company adopt one?

Keep testing. Show them the results of failed tests, including the money saved on dev work. Show them that if you had released the failed variant onto the website without testing it, it would have actually hurt the company’s conversion rate, on top of the money spent on dev work, and therefore lost revenue.

Finally, it’s time for the Lightning round!

Bayesian or Frequentist?

Bayesian

Favourite experimentation tool?

My dev really likes SiteSpect

If you couldn’t work in Experimentation — what would you do?

Something outdoors, only experimentation will keep me inside!

Describe Magnolia in 5 words or less.

Analytical, Methodical, Creative, Innovative, Extrovert

Thank you!

And thank you 🙂



