CR-No: How to manage stakeholders and HiPPOs


Rommil Santiago 0:00
From Experiment Nation, I’m Rommil Santiago, and this is CR-No. CR-No is a series that pulls back the curtain on the conversion rate optimisation industry. Listen in as a panel of experienced CRO veterans talk about some of the joy and a lot of the pains of our industry.

Shiva Manjunath 0:29
Hey everyone, welcome to the CR-No podcast. Today we’re going to talk about a fun topic: stakeholders. What do we do when they don’t agree with our results? How do we get ahead of it? Joining us today on the panel, we have some awesome CRO folks. We have Eddie.

Eddie Aguilar 0:43
Hi, I’m Eddie Aguilar. I’ve been an optimizer for the past 10 years or so. I’ve been doing back-end and front-end programming for more than 10 years, but eventually made my way over into optimization because I didn’t think there were enough good experiences out there.

Shiva Manjunath 1:07
We have Siobhan

Siobhan Solberg 1:08
Thank you. I’m Siobhan, the founder of a boutique optimization agency called Raze. We focus mainly on eCommerce Shopify stores and optimize their customer experience all the way through retention. In my spare time, I optimize everything from walking the dogs to my Ironman training, and annoy my partner while doing so.

Shiva Manjunath 1:32
Oh, that’s a good one. I’ve myself been training for a Spartan Race, so we’ll have to compare some times and workouts. Kenya, how about you?

Kenya Davis 1:41
Hello, I’m Kenya Davis. I am a senior manager of decision science. I kind of fell into experimentation: I started at Lowe’s and built a team up there, and now I’m at Evolytics, working across multiple companies. My passion is in proving things wrong, exploring, and learning.

Shiva Manjunath 2:12
And then last but not least, we have me. I’m a program manager over at Gartner, and I do CRO for a bunch of different brands. That’s my fun life. All right, so why don’t we jump into it? One of my personal gripes is dealing with HiPPOs. I think there are a lot of fun, creative strategies for communicating results to those stakeholders when they see the data. Generally speaking, you would think people who see the data would say, okay, I see the data, I saw the experiment, let’s move forward with this. But what seems like it should be simple is often not reality: users have opinions, and senior executives have brand guidelines they’d like to follow that contradict the test data. So let’s start the discussion about HiPPOs. What are some strategies you might have for dealing with them?

Eddie Aguilar 3:09
I have a question for everyone. Have you ever fired a client because they didn’t accept the data or the results, and just essentially loved the way their site looked prior to experimentation?

Siobhan Solberg 3:27
For sure, I’ve fired numerous clients because of this. I wouldn’t say it’s because of a one-off; it’s more that when it becomes a pattern, you realize it’s just not a good fit. They hired you because it’s a trend to hire you, not because they really want to optimize their customer experience and their website in general.

Eddie Aguilar 3:51
Yeah, I’ve personally had that experience too; let’s just say the client was named Morgan. What happened was we presented results for a homepage test to the CEO and some other VPs on the call. The CEO was just not having it with the data. He preferred how the homepage looked prior to experimenting; even though we drove revenue and conversion rate, it just didn’t matter to him. That client didn’t end up being a fit either. It wasn’t the first time he had done this; there were multiple times where he just didn’t accept the results, and we learned that he wasn’t going to be helpful in trying to drive our optimization program.

Shiva Manjunath 4:54
I’m curious to dig into that. Do you have any insight into why he didn’t love the results? Was it just the look and feel? Did he not like the hypothesis? What in particular about this test wasn’t he resonating with?

Eddie Aguilar 5:10
It was mainly the design. He just really loved the prior homepage design and was not interested at all in the new one. We tried to work with him and give him some designs based around his old homepage, so it kept the same look and feel, but at the end of the day he just wanted to stick to the old homepage, which was not converting as well. My thought is that he helped design the old homepage and was probably not accepting the results because of it.

Shiva Manjunath 5:51
Yeah, it’s the worst when it’s your baby and your brainchild, and someone comes in and says, hey, your baby’s ugly, let’s use a prettier baby.

Eddie Aguilar 6:01
Yeah, I think that could be the case, especially since I’m certain the CEO was very involved with the design originally.

Shiva Manjunath 6:12
It looks like Kenya has something to say. Kenya?

Kenya Davis 6:14
I think it’s a very common thing. I haven’t been in agency life for a long time, so I don’t get the luxury of firing anyone. However, in my experience on the client side, there are often times where I’m dealing with stakeholders who just refuse to accept results, or who are really pushing a test idea based off of an opinion and not a true hypothesis with data. When that happens, since I can’t fire them, my team and I would typically add a section for disclaimers. Those would really call out that it is ill advised to do XYZ, because it will impact the business in this way or that way. That way, it kind of puts it on them, and eventually the leaders above them will see it and ask why they want to keep running garbage tests. It protects us and what all of us are meant to do: we’re here to help you explore and understand things with a level of statistical significance, and your personal bias is not statistically significant. It’s a really tough situation to be in, but I find it’s actually a common mindset amongst stakeholders, especially if they’ve been in their field for a while; they really get married and committed to an idea. I guess you can look at it as flipping it on them: I’ve already stated my piece, and everyone knows what the truth is. If you choose to do anything otherwise, they will also know that was something you chose to do.

Shiva Manjunath 8:11
Yeah, and Kenya, I’ve been in a similar situation, where I don’t have a client necessarily; my client is just another co-worker who has that opinion and has the authority to shoot down a test idea or a test winner, even if they don’t love the end result, which is more money, technically, but a worse design in their opinion. One of the things I think is important, and like I mentioned in previous podcasts, is that there’s a level of politics you still have to play. It’s not coming at this person with, here’s the test idea, and you’re wrong, because that’s kind of combative. It’s not just saying you’re wrong, or here’s the data, we need to move forward with this. It’s trying to be human sometimes: take a couple of steps back and say, well, let’s talk about this. What specifically don’t you like about this? Is it the design? Is it, like Eddie was saying, that it’s your brainchild? Taking a step back and having that conversation to dissect exactly what it is they don’t like could be a really solid solution. Then you can say, okay, maybe we’re not going to take this exact design in this case, but let’s figure out what hesitations you have and work with you to come up with a design we both like, one that takes some of the winning elements from this particular test but is also something you like, so we can come up with a winner you’re actually comfortable rolling out.

Kenya Davis 9:41
Have any of you set up measurement plans to set expectations? To give an example, mine includes test details, market research, technical setup, and strategy images, and then sections asking: what are you going to do if it wins? What are you going to do if it loses? Does anyone set that up? Do you feel like that helps have that conversation? Or are you so slammed that you typically don’t have time to do it?
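
The measurement-plan structure Kenya describes could live in something as simple as a shared document or a small config. Below is a minimal, hypothetical sketch in Python; the section names and example values are illustrative assumptions, not a template taken from the episode.

```python
# Hypothetical sketch of a measurement plan with pre-committed actions.
# Field names and values are illustrative, not taken from the episode.
measurement_plan = {
    "test_details": "Homepage hero redesign; desktop and mobile; 50/50 split",
    "market_research": "Heatmaps and session recordings show low hero engagement",
    "technical_setup": "Client-side A/B test; primary metric: order conversion rate",
    "strategy_images": ["control_mockup.png", "variant_mockup.png"],
    "if_it_wins": "Roll out to 100% and queue a follow-up test on CTA copy",
    "if_it_loses": "Keep the control, document the learning, revisit the hypothesis",
    "if_inconclusive": "Decide based on implementation cost; consider a re-test",
}
```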

“I quite enjoy when [clients] nitpick my tests because #1, it holds me accountable, and secondly, sometimes I start looking at the data slightly differently because of the questions they’ve asked.” – Siobhan Solberg

Siobhan Solberg 10:15
I think that’s exactly the way I tackle it. And I really liked your idea of disclaimers, Kenya; I hadn’t thought of that, and it’s a really cool way to put it on them. But the way I do tackle stakeholders who clearly have a say but are not on board is exactly this way. I measure everything, I set it up, and I set up what we call a question-information-answer section, meaning: we’re going to measure this, and depending on what we find out, we’re going to act this way or that way. If it wins, we act this way; if it loses, that way. It really helps me get them involved in the process earlier on and buying in at an earlier stage. Sometimes I’ll even go as far as including them, not in the design process, but once I’ve got a final mockup, I’ll send them an email and say, here, this is what we’re working with. I’ll let them have opinions; if they’re valid, I’ll incorporate them, and otherwise, I’ll let them know why it doesn’t work. By the time we get to testing, they’re already so involved in the process that it has become their baby. This is the way I’ve worked through it with clients who aren’t as bought into the process. With most of the ones I’ve done this with, when I have taken the time to measure, to set up a plan, and to buy them in, we now have an amazing relationship, and they have a whole company culture of everyone coming up with ideas for testing.

Shiva Manjunath 12:02
Yeah, I have a similar mentality: instead of waiting until the test runs to get buy-in from people, you try to nip that in the bud. That’s where I preach collaboration a lot, making sure that stakeholders who could potentially be the ones to say we don’t want to go forward with this test see it earlier in the process. That means things like getting the design team involved early and looping in the brand team as early as possible, so they know this is a test you’re running and that there are potential results, whether positive or negative. That doesn’t always stop people from having opinions post-test, and maybe things change; this stuff still happens to me as well. But generally speaking, if you try to get ahead of it as much as you can by looping them in earlier in the process, whether that’s a test brief or just a regular cadence of meetings to say, hey, by the way, we have this test coming down the pipe, this is the general design, this is how we plan on running it, I think that always helps mitigate the risk of a test winning in the end but the first time they see it is as a winner. Take it the other way: show them the test before it even runs, make sure they’re good with it, and potentially get that in writing. Then, once it’s good to go and the test eventually does win, you have that to reference and can say, well, you agreed to this, so we’re going to go ahead and move forward with it.

Eddie Aguilar 13:30
That’s part of the process I usually take. There’s always a QA point where the client or the stakeholder in the company needs to QA the experiment, because I want them to be part of that process before the experiment launches; they’ve already agreed to it, so they should be looking at it with a fresh set of eyes once it’s ready to go live. One thing I’ve noticed, though, is that the opinion-based stakeholders are expecting the original, their version, to win, and that’s where I feel a lot of issues start stemming from: not setting expectations for some of these stakeholders or clients. I’ve seen other optimizers do this, and I’ve done it myself too; every once in a while I forget to set expectations, like telling them the sample size or how long it may take to run the test. Those are things that sometimes slip through the cracks when you’re moving at lightspeed. And like Shiva said, things change. Sometimes you have a hypothesis, you get new data, and you have to change your hypothesis to be able to work with the new data that just came in from a different experiment you ran, or something along those lines. I always feel like if you can just set the expectations for most of these clients or stakeholders, it makes it much easier to present them the results right after.
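
Setting those expectations up front can be as simple as sharing a rough sample-size and runtime estimate before the test launches. The sketch below uses a standard two-proportion z-test approximation; the baseline rate, detectable lift, and traffic figures are hypothetical examples, not numbers from the episode.

```python
# Rough pre-test expectation setting: sample size and runtime estimate.
# Baseline rate, lift, and traffic numbers below are hypothetical examples.
from scipy.stats import norm

def sample_size_per_variant(baseline_rate, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant for a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)      # smallest lift worth detecting
    z_alpha = norm.ppf(1 - alpha / 2)             # ~1.96 for alpha = 0.05
    z_beta = norm.ppf(power)                      # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

n = sample_size_per_variant(baseline_rate=0.03, relative_lift=0.10)
daily_visitors_per_variant = 5_000
print(f"~{n:,.0f} visitors per variant, roughly {n / daily_visitors_per_variant:.0f} days")
```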

Shiva Manjunath 15:28
Yeah, agreed. I was going to ask whether you have any fun stories about people who are overly, what’s the word, people who put only particular results under a microscope because those results don’t confirm their bias, versus when something confirms their bias, they’ll give it the least amount of attention and say, well, I agree with this, so I’m not going to give it any second thought. I’ve personally dealt with a couple of tests and a couple of individuals where we’d run a test, and for one particular test, because they didn’t agree with the results and it didn’t fit their preconceived notions about how users behave on our site, they would ask 50 questions, micro-analyze it, and say, I don’t agree with these results. Versus when I gave them a test result that supported what they were looking for, they’d say, oh, this is great, let’s move forward with it, without that same level of scrutiny. Have you run into stuff like that before?

Eddie Aguilar 16:33
Just to give an idea, I typically don’t like adding too many events or things to report on; obviously, you don’t want too many things reported on. You want actionable items that are particular to the experiment itself. You might have three different CTAs on a page, but I’m not going to report on all three if we only want one of them to be actionable. I think that’s where issues start coming up: the stakeholders start looking into, oh, how did the other two CTAs do? Did they do better, did they do worse? And then it starts bringing in that level of detail from other actions that aren’t necessarily included in the experiment.

Kenya Davis 17:33
Yeah, I haven’t necessarily, or rather, I feel like that’s almost normal with any stakeholder who’s not aware of everything that truly goes into experimentation. When they’re approaching A/B testing, they’re coming to you because they feel like you’re going to safely set it up, execute, and report on it. That doesn’t remove them from having any bias; going in, they’re always going to have an idea of what they thought was going to work. So them agreeing with the results and not having questions just means it’s validating them. And when the results don’t yield in their favor, they’re naturally going to say, well, why not? Now I’m curious, I don’t understand. So I really don’t see it as a bad thing if they’re curious; it just means maybe what you thought was a best practice in your head, or something that was a no-brainer, is actually not true, so we have to reject that idea. I’ve dealt with it many times with clients, and I had one who would nitpick everything, every number, only for the things that didn’t yield what he thought was in his favor. In those moments, I get it; it’s just a matter of how aggressive they are and how far they’re going with it. There’s general curiosity, and then there’s not approaching A/B testing as a way of learning and only trying to drive your agenda. That could just be a conversation, or, you know, you can’t really change people that much if that’s just how they are and how they’re going to approach it. I feel like that’s not really anything you can change on their behalf.

Shiva Manjunath 19:35
And we can wrap up with one last comment from Siobhan.

Siobhan Solberg 19:38
Yeah, I was just going to say that I feel a lot of it is about learning how to educate our clients about the process, really starting from the kickoff call. I tend to explain the process I go through, where they can have input and where they don’t, and I have a conversation about this; that conversation doesn’t stop at the beginning of our engagement, it keeps on going. So I quite enjoy when they nitpick my tests, because number one, it holds me accountable, and secondly, sometimes I start looking at the data slightly differently because of the questions they’ve asked. Or it brings up a whole other idea, where I can go back to them and say, oh, that’s a good question, let me look at the data, measure what we need to measure, put this on the list, and test that specific idea. And then they back off quite quickly, because I’m letting them have a voice, and letting them have a voice makes them feel that they get to be part of the process. I take it so far that I build it into my prioritization model, meaning if a stakeholder requests a very specific test, it has some weight; like Shiva said, there are politics at play. But with that whole process, I’ve educated them enough that currently I’m very, very lucky that all my clients really enjoy looking at reports and having a conversation about what the test has done. And throughout this process I’ve learned how to present the data for each specific person; someone’s more visual, someone wants more detail, etc. So I think ultimately it’s all about having this relationship, and once you nurture that relationship, you can have a healthier conversation about all the test results.
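
The prioritization idea Siobhan mentions, giving stakeholder-requested tests some weight, could look something like the hypothetical scoring sketch below; the criteria and weights are assumptions for illustration, not her actual model.

```python
# Hypothetical prioritization score: standard criteria plus a small bonus
# when a stakeholder explicitly requested the test. Weights are illustrative.
def priority_score(impact, confidence, ease, stakeholder_requested=False,
                   stakeholder_weight=0.5):
    """Each criterion is scored 1-10; a stakeholder request adds a modest bonus."""
    score = (impact + confidence + ease) / 3
    if stakeholder_requested:
        score += stakeholder_weight
    return round(score, 2)

print(priority_score(impact=7, confidence=6, ease=8, stakeholder_requested=True))  # 7.5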

Shiva Manjunath 21:36
Awesome. Cool. Well, I think that just about wraps it up for Episode Three. Thanks for tuning in.

Rommil Santiago 21:44
Hi again, it’s Rommil Santiago from Experiment Nation. We hope you enjoyed listening to this episode of CR-No. So what did we learn today? We learned that most experimenters will face HiPPOs, aka the Highest Paid Person’s Opinion, which will often feel disconnected from the results of an experiment. The best ways to handle this are to set expectations from the very beginning, make them part of the process, and make them aware of the consequences of ignoring the data. Everyone has biases, and that’s okay, but it’s our job as experimenters to do our best to remove those biases from decision making. With that, I thank you for listening. If you liked this episode and you think we deserve it, consider subscribing and tell a friend about us. Thank you. Until next time.

Transcribed by https://otter.ai

