AI-Generated Summary
Simon Girardin emphasizes the importance of learning from every test, capturing value by reviewing data, segmenting results, and analyzing user behavior through session recordings and heatmaps.
AI-Generated Transcript
Simon Girardin 0:00
Think about it with your team. If you spend most of your time reporting on numbers, then most of your stakeholders speak that language: uplift, decrease, revenue. Stakeholders speak that language very clearly and very plainly. What they maybe don't speak as well as you do is: what happened? How do you interpret those results? What do we learn, and where do we go next? That is the important part of the test.
Tracy Laranjo 0:23
Hey, Experiment Nation, it's Tracy Laranjo. You are going to love today's episode with Simon Girardin. He is a CRO manager at ConversionAdvocates, and you may recognize him from his thought-provoking posts on LinkedIn. In this episode, we're going to talk about something that every experimenter absolutely must know, which is how to learn from past tests. Then we'll go into the roadblocks that get in the way of iterative testing, and then what you can do before an experiment even goes live to ensure that the learnings will be valuable.
Thank you so much for listening, and let's get right into it. Hey, Experiment Nation. It is your hostess with the mostest, Tracy Laranjo, and
I'm back here with another episode with someone whose content I love. It's always so relevant, it's so clear. I'm just so excited to have them on the show. On the show today is Simon Girardin from ConversionAdvocates, and today we're going to be talking a lot about how to learn from past tests. I think Simon's the perfect person to talk about this. So Simon, welcome to the show.
Simon Girardin 1:26
Hi, Tracy, I'm so happy to be here. We've got a lot to talk about. As you said, I'm CRO program manager over at ConversionAdvocates. I am a huge advocate of testing strategically, having an intent and purpose behind what we do, and being customer focused, which means running research and basing every experiment on some customer or market research insight. That's the best way to generate ROI from your CRO program, and it's the best way to capture learnings as well, which is our topic for today. I'm really excited to dive into this topic, because everyone talks about how you should learn from your tests, and everyone talks about how you should run research in your testing program. But how do you actually learn from the test itself is not talked about as much. So I'm really excited to dive into it.
Tracy Laranjo 2:13
Totally. I think it's so easy for people to just talk about how to get to a test and how to interpret your test, but then it kind of stops from there, to the point where, as a new optimizer, you may be left thinking, okay, well, what do I do from here? What next? So I definitely want to talk to you about this, because I think you've written a lot of great content about this on LinkedIn. And I guess my first big question for you, and something that a lot of newer optimizers might be thinking as well, is: why is it so important to learn from past test results, instead of just moving on to the next thing?
Simon Girardin 2:49
Yeah, for sure. And just to preface this, the feedback that I'm getting in the industry is that there are different degrees of awareness. First, some people are aware that, yes, we need to learn from tests; fewer people are aware of how to do it, what the first step is, and what the output of it is. So that's why it's such a fascinating topic. And the core reason why it's important is multi-fold. On one end, when you build a first experiment and you create follow-up iterations and variations, you gain efficiency over time, because if you build a content block, for example, and then you want to iterate on the content itself, most of the hard work is done on the first attempt. So every further experiment is much more efficient for your whole team. That means you're getting much more bang for your buck, whether you're in-house or with an agency. That's one thing. You also gain insights from every single test that you run. On average, a team spends maybe around six hours working on a test, from the test plan all the way to calling it and doing the results. If you only spend a couple of minutes reviewing it, where are you capturing the value from the test itself? That's where I think it's really important to have this step where you sit down, focus, mute every distraction that exists, and just think about your test and look at the results. There's a lot of stuff that you need to do to be able to capture and create all that value for the organization. The first thing is, when you're looking at the data, you need strong analytics to be able to look at, you know, the metrics that uplifted or decreased. You should never report on a metric that has an 82% probability to be best as if it were statistically significant. You never do that; it's a flat result and we don't go further with it. But everything that shifted and everything that didn't, you should segment as well. So maybe for your mobile visitors, or maybe you have different segments, and sometimes it's also a cohort: mobile and new visitors together can have very interesting behaviors that are different from the rest. You want to spot these different trends and try to understand what happened for these people. You also want to go into your UX research. So, okay, I've run a test, we spent multiple hours setting it up and running it, and now I have some results and I know what happened. You should spend some time looking at session recordings and heatmaps. Maybe there's nothing to gain, but maybe you'll see something that appears directly to your eyes, and you know what, you take action from it. So there's the analytics part, and then there's a kind of post-test UX research. You shouldn't spend multiple hours on that, but just do a quick health check: hey, I have an intuition to look at this or that, or mobile was really interesting, let's look at a couple of session recordings. Those things are important, because then you get a full understanding of what happens on this page and how our users behave there. And then you've just started the process. Next, you have to interpret that data, and that can be challenging. If you don't have a data scientist, it's going to be challenging; I really recommend you work with an expert data scientist. I'm the type of person, as a program manager, who asks questions: what about this? What about that?
Can you tell me if this type of user had behavior similar to what I'm expecting? I'm not able to pull that data myself, so I have someone helping me. So then, when you need to interpret the data, this stage is basically just trying to put into words what happened and why it is or isn't what you expected. And what's important is that you don't need to own the truth here, is what I mean. You can just say: I have a hypothesis, or my suspicion is this, or perhaps this specific behavior happened for a reason tied to your research and tied to why you built this experiment. You have a hypothesis that basically is a solution to a problem you've identified. So when you take all of this together, you create a very intentional process of taking the initial data points, taking some follow-up extra data, and then creating a very rich explanation of what happened and what we've observed. So even if you have an inconclusive test or a losing test, suddenly the value is completely different, because your team doesn't only see uplifts and revenue.
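To make the segmentation and "probability to be best" idea Simon describes concrete, here is a minimal, hypothetical sketch (not his team's actual tooling): a Beta-Binomial estimate of the chance that a variant beats control, computed per segment. The segment names and counts are invented for illustration.

```python
# Illustrative sketch: "probability to be best" per segment with a Beta-Binomial model,
# so a flat-looking overall result can be broken down by device or visitor type
# before the test is called. Not a definitive implementation.
import numpy as np

def prob_variant_beats_control(conv_c, n_c, conv_v, n_v, draws=100_000, seed=0):
    """Monte Carlo estimate of P(variant rate > control rate) under Beta(1, 1) priors."""
    rng = np.random.default_rng(seed)
    control = rng.beta(1 + conv_c, 1 + n_c - conv_c, draws)
    variant = rng.beta(1 + conv_v, 1 + n_v - conv_v, draws)
    return float((variant > control).mean())

# Hypothetical post-test numbers, segmented the way Simon describes.
segments = {
    "all":          dict(conv_c=480, n_c=12000, conv_v=505, n_v=12000),
    "mobile":       dict(conv_c=260, n_c=7000,  conv_v=300, n_v=7000),
    "new_visitors": dict(conv_c=310, n_c=9000,  conv_v=350, n_v=9000),
}

for name, counts in segments.items():
    p = prob_variant_beats_control(**counts)
    # An ~82% probability to be best is a trend worth noting, not a result to report as a win.
    print(f"{name:>12}: P(variant best) = {p:.1%}")
```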
Tracy Laranjo 7:07
100%. And you just spoke to something that I think every optimizer encounters, which is: you look at a result, and the numbers show you, okay, it was a loss. But just going into your tests and approaching them with a learning mindset, it's never a loss if you actually learn something that's going to take you to the next step, or that is forming the narrative that will lead you to that big learning and that big win.
Rommil Santiago 7:35
This is Rommil Santiago from Experiment Nation. Every week we share interviews with, and conference sessions by, our favorite conversion rate optimizers from around the world. So if you liked this video, smash that like button and consider subscribing; it helps us a bunch. Now back to the episode.
Tracy Laranjo 7:48
Do you ever find that clients will push back on iterating on a test and they just want to move on to the next big thing?
Simon Girardin 7:57
For sure, that happens. And you just touched on this: one thing that's important as you're going through that process, and I do that on a call with my clients because I find that having this conversation in real time adds another layer of value, is that I'm able to gauge every stakeholder and understand where they stand, and whether they seem disengaged or super engaged when we're talking. Something important is that every test loss specifically means you've prevented something that, if you were to just ship to production, would have hurt the site. So it's important to articulate that. If you have this aspect of, we've prevented a loss, and then here's what we learned, and we've discovered some behaviors and trends that are interesting, then multiple things can happen. One thing I like to say is: what we did is we shifted customers in a certain direction and we created a loss. If we're able to do a 180 degrees, maybe the next test is going to be a win. But now we've built on learnings and we've understood that we touched a lever, just on the wrong side of it. So if we were to just flip it, maybe this time it's going to be a win. So for sure there's some sort of pushback, but how you approach losing results before you even let your stakeholders chime in, I think, has a real impact on how they perceive it, and on how much of their attention is going to be spent on "Oh God, we lost" versus "Yeah, okay, there's some follow-up potential here, and this was a valuable experiment."
Tracy Laranjo 9:16
Absolutely. Do you find that when you go through this process of iterating, there's a certain number of iterations, and once you reach that point you say, okay, it's time to abandon this and move on? What is kind of your judgment criteria for knowing when it is time to stop iterating on one specific hypothesis?
Simon Girardin 9:40
That is a great question. And I will say it is an intentional process, but I don't think there's a clear guideline or guardrail that would apply to every single experiment. There are a couple of things that come into this conversation. One: how highly prioritized is this hypothesis? Every quarter you should be looking back at your testing backlog and maybe reprioritizing things. I don't think every team does that, and they should. Specifically, one of our criteria on our prioritization matrix is: are we testing against the key KPI of this quarter? Every quarter there would be one key metric that we try to increase, because that allows us to focus on it and make sure that we gain traction. Every quarter you might pivot, so maybe this test was good for the last quarter, but now we want to pivot to something else. So first, is it still highly prioritized? If yes, then let's keep looking at it. Second, is every follow-up iteration still teaching us something new, or have we suddenly gotten stuck in a rut, where we're shipping tests and there's no follow-up actionable stuff that happens, no segments that are interesting, no traction spotted anywhere? Previously I said you should never report on a metric that's at 82% probability to be best, but don't disregard it either. There's traction that you're seeing here, and if you're able to increase the effect you're causing, maybe the next variant is suddenly a statistically significant result. So it's important that you take all of that into account. But when there are no more learnings, no more interesting data points from your segmentation and results analysis, then you're starting to get to a moot point and the efficiency isn't even worth it anymore. Another thing is: have you uncovered new issues through research that are suddenly more important or more pressing? If so, then maybe it warrants a pivot. So it really is an intentional process more than a clear guideline or an SOP that you can use in your business. But it's just important to ask these questions before you actually make up your mind, let's say.
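As a rough illustration of the quarterly gate Simon describes (is the hypothesis still tied to the quarter's KPI, is it still teaching us something, has research surfaced something more pressing), here is a hypothetical sketch; the field names, threshold logic, and example are assumptions, not his team's actual SOP.

```python
# Illustrative sketch only: the kind of quarterly "keep iterating or pivot" check described above.
from dataclasses import dataclass

@dataclass
class HypothesisStatus:
    name: str
    targets_quarter_kpi: bool           # does it still target this quarter's key metric?
    new_learnings_last_iteration: bool  # did the last iteration teach us anything new?
    promising_trend: bool               # e.g. ~82% probability to be best, not significant yet
    more_pressing_issue_found: bool     # has research surfaced something more urgent?

def keep_iterating(h: HypothesisStatus) -> bool:
    if not h.targets_quarter_kpi or h.more_pressing_issue_found:
        return False  # reprioritize or pivot
    # keep going while tests still teach us something or show traction
    return h.new_learnings_last_iteration or h.promising_trend

print(keep_iterating(HypothesisStatus("shorter checkout", True, False, True, False)))  # True
```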
Tracy Laranjo 11:33
Totally, absolutely. And it also opens up the opportunity for totally different hypotheses too; you don't have to just keep iterating on the same one. And yeah, that is a huge value add that I've seen with this approach. Did you want to add anything there?
Simon Girardin 11:50
Yeah, absolutely. There are two levels at which you can look at every single test with your teams: one, do we align on the hypothesis? And second, do we align on the variation? I think this is often disregarded. The difference is that a hypothesis is: there's an issue, and here's what we think is going to solve it. Whereas the variations are all the various ways that you can solve that problem. So what's important is that you can iterate on the hypothesis, and you can iterate on the variations. Sometimes you might be testing against the same hypothesis for six months, but you've pivoted multiple times in terms of what you do in the variation and its treatment. That's important because, and I'm going to go on a parallel topic here, if you ever encounter pushback with test plan presentations and getting approvals, one piece of advice that's really great is to go to these two different levels and check where there's a lack of agreement and alignment with your teams. If it's on the hypothesis, then either the problem is not important to your stakeholders, or your proposed solution doesn't sit well with them. That's a great point for you to start a conversation, think critically, and then find: where's the middle ground, or where should we go next? Because maybe you just need to disregard where you were headed. But if you get to the second level, where you disagree on the variation, then you can work with your teams to actually build up what it would look like so that everyone is on board and happy with it. Sorry, that was a quick side note, but I thought it was worth mentioning.
Tracy Laranjo 13:13
No, it's so relevant, because you can end up with a result, and it might have nothing to do with your hypothesis being wrong; it's just that the treatment, or the approach to it, was not correct. So I think it is really important to call that out, because it doesn't necessarily mean you need to throw your hypothesis out, you just need to change the way you approached testing it.
Simon Girardin 13:38
And something that Jeremy Epperson says often is that sometimes, you know, you might have the right data points and the correct hypothesis, and only be slightly off in your implementation, and you don't find a win. If you abandon your test there, you'll never know that you were that close to actually finding a winner. So it's a lot about setting up the right mental models and going into this journey of optimization, not just one quick shot and it's a score. Sports teams, you know, have multiple touch points before either the ball, or the puck if you're in hockey, goes into the net; there are multiple players who are going to touch it, interact, and have these multiple touch points following sophisticated strategies. So you need to think of your experimentation the same way. Maybe you need to do a couple of passes before it gets to the goal.
Tracy Laranjo 14:26
Totally. And you just really made it easy for me to segue into this big question that I have. It's the question that I always come across when I'm starting an experiment, and it's: what is the one thing that I need to do today, before this experiment even gets executed, before I even start designing it? What do I need to do today to make sure that when I look at this report at the end, I'm not scratching my head saying, did I actually learn anything from this? What would you say is the thing?
Simon Girardin 14:56
It's not one thing; it's actually an intentional process that you need to have in place, with multiple parts. This process is actually sitting down when you create your test plan. What is most commonly known in our field and industry is, you know, you have the URL targeting, the audience targeting, the traffic split, and all these kinds of technical settings that you're going to set up in a test. But if you stop there, you're just getting a test shipped; that's not really how you're going to learn from it. So the first thing is: go back to your research. Basically, when you're testing, ideally you have multiple research methods that all point to one same user friction that you want to solve. Then go back to your hypothesis and think: based on all these insights that I'm getting from research, I see this issue, and I am trying to provide this solution to solve it. It's super important that you internalize this process and be very purposeful when you're thinking about it. Next, what you want to think is: what I'm doing is creating an experiment that's going to help us decide if that is the right solution or not. So we go back to the actual purpose of CRO and experimentation, which is making business decisions that are accurate and effective. When you go back to all of these points, you can also think of what the desired or expected outcome is. You should not, you know, try to rig any of the settings of your experiment to achieve those results, but it's important that you think of them. Specifically, if what you're doing is shipping a new feature, and you expect or hope that it doesn't suppress conversions from a different action on your site, then your expectation going into the test results is that if you find flat results, no decreases and no increases, it's a good thing, because you haven't suppressed anything. But if you went into the experiment without having had that intentional thought, you might just call this inconclusive and move on, while in reality you've confirmed that you created no harm. So this is an important part of the process, because it changes how you call the test. Then, when you actually get into the results and you analyze the segments, you can think back to your research and say: there were this and that customer insight or market research insight that pointed to this solution. So if you were to draw, you know, some sort of a chart, you'd have a bunch of lines meeting at a certain point where you have your problem and hypothesis, and your intent, going into your test results, is to see whether things followed the direction that was intended or not. Having this process takes time. That's why I'm saying, if your team spent something like six hours working on a test, you should sit down and spend as much time as you need to formalize all of this, write it down, make it formal, and share it with your teams. I said a lot of stuff there to digest.
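One way to make this pre-launch step tangible is to write the learning goal into the test plan itself, alongside the targeting and traffic split. The sketch below is a hypothetical illustration of that idea; the field names and the example plan are invented, not ConversionAdvocates' template.

```python
# Minimal sketch: capture the research, hypothesis, and pre-committed interpretation
# rules in the test plan, so a "flat" result can be read as "no harm confirmed"
# rather than called inconclusive. Everything here is illustrative.
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    name: str
    url_targeting: str
    traffic_split: dict                 # e.g. {"control": 0.5, "variant": 0.5}
    research_insights: list             # the insights the hypothesis is built on
    hypothesis: str                     # problem + proposed solution
    expected_outcome: str               # what a "good" result looks like, decided up front
    call_rules: dict = field(default_factory=dict)  # how each result will be interpreted

plan = TestPlan(
    name="sticky add-to-cart",
    url_targeting="/product/*",
    traffic_split={"control": 0.5, "variant": 0.5},
    research_insights=[
        "session recordings: users scroll past the CTA on long mobile pages",
        "survey: 'hard to find the buy button'",
    ],
    hypothesis="Users lose the CTA on long pages; a sticky CTA will recover those conversions.",
    expected_outcome="Feature ships safely: no suppression of newsletter signups.",
    call_rules={
        "flat on signups": "success: confirmed no harm, recommend ship",
        "signups down":    "loss prevented: do not ship, iterate on placement",
        "cart adds up":    "win: segment by device before recommending rollout",
    },
)
```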
Tracy Laranjo 17:47
I want to actually dive a bit deeper into that. When you are in the process of looking at test results with your team, what does that look like? Does it look like a call that you set up, or is it just separate analyses going on? Really paint that picture for me.
Simon Girardin 18:04
It's a multi-step process. To be honest, the first part is, you know, statistical analysis and just kind of results reporting. We make sure we report on all the key metrics, and we'll also look at some core segments that are usually relevant for our clients. For one client we're working with, we're always looking at new versus returning visitors as the most important differentiation for their site. Others have, like, 75% of their audience using their mobile phone, so mobile is always a segment that we look into before almost anything else. That's the first part. Our CRO analysts work on creating those analyses, reporting dashboards, and reports. Then our program manager, which is the role I'm in, will go in, look into these results, and formulate a recommendation in terms of what we've learned and what the next steps are. What we also do is, once a week, our team meets with everyone: designers, developers, CRO people. We look at these results and we try to answer questions like: are we surprised by these results? Where do we go next? Do you see any variations that we could follow this up with? It's important that you include all of these people with various backgrounds, super important. If you have some stakeholders who are executive level or director level, invite them as well, because everyone has a unique perspective, and their baggage is not the same as yours. It adds a lot of sophistication and bridges the gaps and blind spots that everyone has. And so when we get this team together, what I like to say is we turn one test result into ten new hypotheses or variations. That creates a kind of exponential effect and value for the client team, but also for our own internal team, as everyone is maybe challenged or asked questions that maybe had not been thought of. So it's a multi-stage process. And let me wrap this up by quoting Manuel da Costa, who I really love, when he says you should never grade your own work. Doing this exercise as a team ensures that when you discuss this back with your team and stakeholders, you have reliable results that are trustworthy. And that is important, because we're basing important business decisions on them.
Tracy Laranjo 20:17
Yeah. And also, any result in which you learned something isn't a bad result; it's always a good result if you've learned something and if you've mitigated some sort of risk.
Simon Girardin 20:27
Absolutely. And so, that's not part of our process of calling tests, but afterwards there's a formal process of presenting the results. And we take our time with that: if we're on a call, we might spend five to ten to fifteen minutes on a single test. The reason for that is that we report on the numbers, but think about it with your team: if you spend most of your time reporting on numbers, then most of your stakeholders speak that language, uplift, decrease, revenue. Stakeholders speak that language very clearly and very plainly. What they maybe don't speak as well as you do is: what happened? How do you interpret those results? What do we learn, and where do we go next? That is the important part of the test. So the numerical values we report on rather quickly: hey, we've seen this and that shift, and by the way, there was a segment that was super interesting, and that monitoring metric was completely off the charts, and we didn't expect that. Let me just make a quick sidebar here: you should have multiple layers of metrics. Your primary KPI is the one you're trying to directly affect; you should have secondary KPIs, which are important metrics, just not the one that you're focusing on; and you should have a bunch of monitoring stuff, like CTA clicks, pageviews, and all these kinds of engagement metrics, like bounce rate, revisit rate, time on page. None of these should be the primary criterion for calling a test, but they are all important, and you need to look at this holistically. So when we're with the client team, we don't spend that much time on all of these things, but we'll report on all the stuff that we found interesting.
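Simon's layering of metrics can be captured in a simple structure so it's unambiguous which metric a test may be called on. The sketch below is illustrative only; the metric names are placeholders, not any client's actual setup.

```python
# Hedged illustration of the metric layers described above: one primary KPI the test
# is called on, secondary KPIs for context, and monitoring metrics that are reviewed
# but never used to call the test.
metric_layers = {
    "primary":    ["purchase_conversion_rate"],            # the only metric the test is called on
    "secondary":  ["revenue_per_visitor", "average_order_value"],  # important, but not the focus
    "monitoring": ["cta_clicks", "pageviews", "bounce_rate",
                   "revisit_rate", "time_on_page"],         # engagement context only
}

def call_basis(metric: str) -> str:
    """Return which layer a metric belongs to, so nobody calls a test on bounce rate."""
    for layer, metrics in metric_layers.items():
        if metric in metrics:
            return layer
    return "untracked"

assert call_basis("bounce_rate") == "monitoring"            # look at it, don't call the test on it
assert call_basis("purchase_conversion_rate") == "primary"  # this is what the call is based on
```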
Tracy Laranjo 21:57
Oh, sorry, no, sorry, go on ahead. I'm just like, oh, I love talking about metrics.
Simon Girardin 22:03
And so the next step is we create a full slide on what we've actually learned from this test, and we'll spend time sharing our interpretation. As I said before, you don't need to own a truth here. You just need to say: maybe there's a follow-up hypothesis, maybe we have unanswered questions, maybe this test made us realize that we have more questions to answer, and that's super exciting. It changes how our stakeholders see the test, because as soon as you do that, it stops being a finite game and it becomes infinite. So what have we learned? User behavior shifts, expectations that were met or unmet, things that we validated or invalidated. It's also important that you formalize: what is the test outcome, and also, what's the recommendation? Maybe it's a test win, but you don't recommend an implementation for some reason. Or maybe, you know, I just had a test last week where we found a winner for the mobile segment specifically, but this client doesn't really have a differentiated experience between device types. So before we go and implement that and increase tech debt, we actually want to do more variations to see: is that a trend we'll monitor over time with multiple tests, or was that a one-off? Because that will really inform whether we want to implement that win to production. So that process is important: what's the outcome, and what is the recommendation next? And it's totally okay to say this test is a win, but we should maintain control for now and do more iterations; let's keep it in our back pocket, it was really good.
Tracy Laranjo 23:30
Okay. You said something there, and I promise this is gonna be my last question about the topic. So there are situations in which you actually do have, technically, a win, but it's within one segment, and you choose not to actually implement it. Is that because, you know, you want to keep the experience consistent between devices, and then you keep testing until you find the treatment that's going to be significant on all devices? Can you talk a bit more about that? What happens if you encounter that situation?
Simon Girardin 24:03
So with segmentation, there's a big risk that you run into, and to be clear, we rarely run segmented tests when we start a CRO program. We generally start testing globally, and we will only test on segments when we have validation. For example, if we had five or six different experiments that pointed towards mobile and desktop performing in opposite ways, then we'll start designing experiments specifically for a device type. But before that, the danger with segmentation is that if you found a win for a specific segment and you go to implement it, then your experience is differentiated on desktop and mobile. That's not the issue. The issue is that in the backend of the site, you have differentiated experiences, you have a larger codebase, and now you have a bunch of challenges for the engineering team to manage. The more that you test and the more you segment, if you start doing that really frequently, the engineering team is going to be really confused and it's going to be challenging to maintain the site. So that's why we might hold off on an implementation to production until we get multiple validating experiments that tell us, okay, we need to go that route. Back to the core purpose of experimentation: we're trying to gather data to make efficient decisions for the business. So if you just call it a win and push it live and then create tech debt, maybe that's not even an efficient decision.
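The validation rule Simon describes, only designing device-specific tests after several experiments agree that segments diverge, could be sketched like this; the past-test data and the threshold of five are illustrative assumptions, not his actual criteria.

```python
# Illustrative sketch: count how many past tests show mobile and desktop moving in
# opposite directions before committing to device-specific experiences (and the tech
# debt that comes with them).
past_tests = [
    {"name": "hero copy",     "mobile_lift": +0.04, "desktop_lift": -0.02},
    {"name": "sticky cta",    "mobile_lift": +0.06, "desktop_lift": -0.01},
    {"name": "shorter form",  "mobile_lift": +0.03, "desktop_lift": -0.03},
    {"name": "trust badges",  "mobile_lift": +0.01, "desktop_lift": +0.02},  # not divergent
    {"name": "image gallery", "mobile_lift": +0.05, "desktop_lift": -0.02},
]

divergent = [t for t in past_tests
             if (t["mobile_lift"] > 0) != (t["desktop_lift"] > 0)]

REQUIRED_EVIDENCE = 5  # e.g. five or six experiments pointing the same way
if len(divergent) >= REQUIRED_EVIDENCE:
    print("Enough evidence: start designing device-specific experiments.")
else:
    print(f"Only {len(divergent)} divergent tests: keep testing globally to avoid tech debt.")
```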
Tracy Laranjo 25:25
Totally, I'm so glad you said that. It's easy to get wrapped up in the dopamine hit you get from encountering a win, and then in the long term it's actually not really a win if the rest of the team who's responsible for implementing it is now stuck in this tech debt situation where they have to manage all these inconsistencies. I'm so glad you mentioned that. You said so much that's so helpful, I think, to anyone who encounters these very specific situations after a test wraps up. I'm not new to this, but I still learned things, so I really appreciate that. Simon, thank you so much. I have one big question for you. It's going to be really difficult. Who should we interview next?
Simon Girardin 26:16
Oh, wow, that is a great question. If you want to stay exactly on the topic, I can't recommend Shiva enough, who's always talking about testing to learn. That would be really, really relevant. But otherwise, I would recommend maybe talking with Manuel da Costa. I just had a conversation with him last week, and it was super insightful, because we talked about: should you have exec-level buy-in into your program, and what happens if you don't? We were both of the school of thought that you actually need that exec-level buy-in; you need the desire and interest in experimentation to come from the top-level leadership if you want it to be a success for your organization.
Tracy Laranjo 26:53
Totally, that's a really good topic, so we'll definitely reach out to Manuel. Also, Shiva is a good friend of mine, so I'm just going to give him a nice nudge that he was mentioned here. So thanks, for sure. Yeah, I know you're watching this; you'd better be watching this. And then, I guess lastly, do you have anything going on that you want our listeners to know about and check out?
Simon Girardin 27:17
Not necessarily. You know, you can follow me on LinkedIn; I post content there every day. My direction and purpose there is sharing stuff about how to actually test strategically, and the other kinds of topics that we've talked about, I post about them too. I also share case studies and stories of what happens in the day-to-day work. I'm really excited about working with multiple stakeholders, and I'm also really passionate about the challenges that these teams at, you know, fast-growth startups experience, and about helping them grow through this rapid pace and weave experimentation into the growth process of the entire organization. I'm super passionate about that. So just follow me on LinkedIn, and that's all.
Tracy Laranjo 27:59
I second that: do follow Simon on LinkedIn. Thank you so much, Simon, for speaking with us today, and thank you to our listeners for listening. I hope you learned a lot; I did, and I'm going to keep following your content. I always love it. So thank you so much, Simon, and I hope you have a great rest of your journey in experimentation.
Simon Girardin 28:19
Thank you, Tracy, it was great. And thanks to Experiment Nation; I'm always happy to be on this podcast and share knowledge with everyone.
Rommil Santiago 28:27
Happy to have you. This is Rommil Santiago from Experiment Nation. Every week we share interviews with, and conference sessions by, our favorite conversion rate optimizers from around the world. So if you liked this video, smash that like button and consider subscribing. It helps us a bunch.
If you liked this post, sign up for Experiment Nation’s newsletter to receive more great interviews like this, memes, editorials, and conference sessions in your inbox: https://bit.ly/3HOKCTK