McAfee’s Claire Fitzpatrick: Experimentation isn’t something done at the end of a process; it’s part of a continuous customer-obsessed loop
A Conversion Conversation with McAfee’s Claire Fitzpatrick
One of the most dangerous things when running an Experimentation program is to allow it to be done in a silo, separate from everything else. It is a never-ending fight to ensure that Experimentation is part of a continuous customer-centric feedback loop along with other functions like Analytics, Customer Research, etc. Today, I chat with Claire about how she sees Experimentation as being part of this loop, how her team shares learnings, and what she’s currently reading.
Rommil: Hello! Welcome to Experiment Nation, Claire. It’s a pleasure to chat with you today! How have you been?
Claire: I’m doing great! Thank you for having me.
Great to hear! I’d love to hear a bit of your backstory and how it led up to what you’re doing today.
To be honest, I completely stumbled into experimentation. At the time, I was working on my graduate degree in marketing at the University of Texas at Dallas. I knew I loved data and consumer behaviour but didn’t know the specific career path I wanted to take. I was lucky enough to take a hands-on course covering Google Analytics and Adobe Analytics, which really solidified my desire to be in an analytics-focused role. I worked with my professor, who set me up with an interview for an internship at Vertical Nerve, a digital marketing agency here in Dallas. I interviewed with the analytics, SEO/SEM, and CRO departments. I had never heard of CRO or conversion rate optimization before, but it sounded interesting, so when they offered me the position, I took it! It wasn’t until I actually started working there that I learned what testing was. Thankfully, I fell in love with it!
“To be customer obsessed is to not only put yourself in the customer’s shoes, but to test and optimize experiences based on customer problems.”
As I understand, you’re a big proponent of “Customer Obsession”. At a high-level, what does this mean to you?
Customer obsession is definitely a buzzword right now, but I strongly believe in the intent behind it. To be customer-obsessed is to not only put yourself in the customer’s shoes but to test and optimize experiences based on customer problems. I recently attended a webinar where Intuit was discussing their “Designing for Delight” framework. Part of the framework is to have deep customer empathy and to fall in love with your customer’s problems. I love how customer-focused their framework is. I am passionate about building products and experiences that customers love using, but it’s not always a home run on the first try. That’s why experimentation and continuous iteration are so important. Test until you get it right, and then keep testing.
“There should be a continuous feedback loop, both qualitative and quantitative, throughout the testing lifecycle.”
Definitely. Iteration is at the core of any Experimentation culture. What role does Customer Obsession play in testing and the testing lifecycle?
There should be a continuous feedback loop, both qualitative and quantitative, throughout the testing lifecycle. What that means is that you’re collecting and analyzing feedback from users to initially identify customer pain points. Then you can create problem statements and proposed solutions and do rapid testing to get immediate feedback from users to ensure you are solving the right problem; one of the worst feelings is seeing a test fail because you’re solving the wrong problem. Even during a live test, we have heatmaps/scroll maps and polls set up to see how users are reacting to the experiment and to determine if results are coming in as expected. Sometimes, we see that by solving the first problem, we have unintentionally created a new one. That’s an opportunity for follow-up testing.
How do you personally approach uncovering customer pain-points?
Other than having direct feedback from customers, I love spending time digging through analytics, session recordings, heatmaps, etc. I make notes of interesting trends or anomalies and see if I can corroborate them with additional data sources.
“One way to know if you’re moving too quickly is if you don’t have a solid answer as to why you are testing.”
One of the hardest things to do is to convince clients to embrace Experimentation. However, once they are on board, how do you approach test velocity? How often should people experiment and when do you know you are going too fast?
Testing should be viewed as a tool that people can use to validate hypotheses, not a task to complete. That being said, people should test as often as possible given environmental and internal constraints. One way to know if you’re moving too quickly is if you don’t have a solid answer as to why you are testing. I would rather go slow and be confident in the learning opportunity of the test, regardless of the outcome, than go too fast and not learn anything.
“At McAfee, we have daily stand-ups where we give updates on the status of our tests to the rest of the team.”
Related to the subject of test velocity, how do you approach sharing learnings, especially when you start having a good number of tests going on at the same time?
I’m finding that this is a challenge for a lot of testing organizations. At McAfee, we have daily stand-ups where we give updates on the status of our tests to the rest of the team. It’s also a place where we discuss our results from tests and get feedback from each other. I find that discussing results among a group helps to share the learnings as well as give different perspectives on the data.
“We have worked very closely with our data science teams to define proxy KPIs or metrics that are known to improve retention rate.”
You recently became the testing manager of retention at McAfee. How do you run tests on retention considering retention takes a while to measure?
It can definitely be a challenge! As McAfee is a subscription-based business, it means that it can take a year or two to see the results of some of our tests on retention rates. We have worked very closely with our data science teams to define proxy KPIs or metrics that are known to improve retention rates. If I can increase a proxy metric by x%, then I know I will have a y% impact on my retention rate 6 months from now.
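To make that proxy arithmetic concrete, here is a minimal sketch in Python. It assumes a simple linear proxy-to-retention relationship; the coefficient, the example proxy metric, and the function name are all invented for illustration, since the interview doesn’t describe the actual model McAfee’s data science team fitted.

```python
# Hypothetical sketch, not McAfee's actual model: assume data science has
# historically fitted a linear relationship between a proxy KPI (e.g. rate
# of successful product activations) and retention rate measured months later.

def projected_retention_lift(proxy_lift_pct: float,
                             proxy_to_retention_coef: float) -> float:
    """Project the retention-rate impact (y%) of a proxy-metric lift (x%).

    proxy_lift_pct: relative lift observed in the proxy metric (4.0 means +4%).
    proxy_to_retention_coef: fitted retention lift per point of proxy lift
        (an invented value here; in practice it comes from historical modeling).
    """
    return proxy_lift_pct * proxy_to_retention_coef

# Invented numbers: a +4% proxy lift with a fitted coefficient of 0.5
# projects a +2% retention-rate impact roughly six months out.
print(f"Projected retention lift: {projected_retention_lift(4.0, 0.5):.1f}%")
```

The design point is simply that a pre-validated coefficient lets a team read a long-horizon outcome off a short-horizon metric, so a retention test doesn’t have to run for a year before it teaches you anything.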
For those struggling to understand the value of experimentation, what words of encouragement do you have for them?
I find that experimentation can be an overwhelming concept for some people. It blends different disciplines, such as statistics, data science, design, development, and QA, into one process. Once the process is broken up into its respective parts, people have a better understanding of the ultimate goal of the test and the rationale behind why we are testing. Also, explaining that we test to prevent wasting time, resources, and money on a bad idea helps give context as well. I’m not sure how encouraging those words are!
I understand that you volunteer at the Junior League of Dallas. Could you tell us a bit about that cause?
The Junior League of Dallas is an amazing organization that brings women together throughout the country to support local charitable organizations. I had the pleasure of working with the Senior Source (an organization that improves the quality of life for ageing adults), Habitat for Humanity, and Equest (an equine therapy organization that supports people with disabilities). I highly recommend this organization to any woman interested in supporting her community.
Finally, it’s time for the Lightning round!
Pick: Bayesian vs Frequentist?
Unfortunately, this is an “it depends” answer.
The Cowboys vs the Mavs?
My fiancé is an avid basketball fan, so by default, I am as well :)
Optimizely vs. Google Optimize?
Optimizely!
Marketing vs. Finance?
Marketing, although I still find finance fascinating. I’m currently reading Michael Lewis’s Flash Boys, and it’s incredible!
I might just have to check that out! Claire, thank you!