CR-No is a series that pulls back the curtain on the conversion rate optimisation industry. Listen in as a panel of experienced CRO veterans talk about some of the joys, and a lot of the pains, of our industry.
Hi, everyone, welcome to another episode of CR-No. This episode's topic is what's slowing your CRO program down, and how to make it go faster. On today's panel, we have Shiva.
Shiva Manjunath 0:48
Hey, this is Shiva. I’m a program manager for optimization at Gartner.
Rommil Santiago 0:52
And we have Kenya.
Kenya Davis 0:54
Hello, I am the Associate Director of experimentation at Evolytics.
Rommil Santiago 0:59
That’s a promotion.
Rommil Santiago 1:06
Today, we have myself, Rommil. I lead experimentation and personalization over at Loblaw Digital in Canada. All right, this is gonna be a very interesting topic. I feel, as people who lead experimentation programs, we always want to go faster, but things never go as fast as we want. I'd love to hear from Shiva, who came up with this topic, on his view.
Shiva Manjunath 1:26
Yeah, I mean, one caveat, right, is that faster doesn't always mean better, and slower doesn't always mean worse. Depending on what you're testing, you could be taking the time, based on, like, five iterations of a test, to really nail a design down, QA it, and get it going. But you could also be spending a lot of time QAing a particular test that you shouldn't be investing all that time into. So there's a definite balance of quality versus quantity here. One of the things I've generally seen slow a program down is a fixation on the absolutely perfect, winning version of a test, rather than, to be blunt, going a little quick and dirty, testing out concepts, and being a little more nimble and agile to accelerate the learnings. Rather than the pixel-perfect, best, winningest version with all the colors and all the elements fine-tuned to exactly where they need to be.
Rommil Santiago 2:29
You often hear people saying that they want to be proud of what they ship, so you kind of understand why they want good products to go out. But I was wondering what your thoughts are: why is there this fixation on perfection when it comes to testing?
Shiva Manjunath 2:39
I mean, I could easily say that as optimizers, we literally test to strive for perfection, right? We see a template and we say, all right, let's do this to make it better, let's do this to make it better. And I've interacted with a lot of designers who have that same mindset. I mean, that's why we experiment, that's why we test. It's in our core to create the best possible version. But sometimes you literally have to physically pull yourself out of it and ask: is this core to our hypothesis? Is this core to what we want to learn more about? And can we get this out quickly? Especially if you're testing into a brand new concept. Let's say you've just built a brand new, really cool e-commerce product page video that you want to start testing. Instead of thinking, "Well, I think the best possible version will be in the middle of the page, between this section and this section, because of X, Y, and Z," you can try that, but it might be quicker and dirtier to just literally insert it as a secondary image on the page and see if it gets attention, or maybe insert it at the very top of the page. Maybe it's not the cleanest UI, but you track the attention, you track the clicks, you segment the users to see whether more people click on it versus not, and then go from there. And I think one of the commonly overlooked things is iterations, where you take a concept, keep it dirty, and then iterate from there. It's not building a brand new experiment; your iterations are always going to be quicker, theoretically. So I think those are definitely big reasons why things get slowed down. It's this eye for perfection.
And it’s like, let’s get the best version instead of like, let’s get the learningest version, if that makes sense.
Rommil Santiago 4:33
I like that. I like that you call it the learningest version. No, I think that actually makes sense. We're all about trying to generate learnings, so we should be optimizing for learning, and not necessarily pixel-perfect versions of things. Kenya, I'd love to hear your perspective.
Kenya Davis 4:48
Yeah. So this is one of my favorite areas to deal with across multiple clients. The goal is always to move faster, and Shiva, I'd agree that sometimes the focus is on the most perfect version of the test going in; at that point it's not a test, it's pushing out what they've already come up with. But there are a lot of cracks within the process that I've seen slow down programs. One is the lack of validating tests before they go out, and although that can slow things down, there are so many ways of putting in checks and balances to automate that process for you. Another is the time to analyze your tests. There are so many companies and products out there that have, I don't want to call them automated test readouts, but that's really what they are: a template that gives you the information at a faster rate, which still requires you knowing your business. Another area is documentation and understanding of optimization, all the layers that come with experimentation, personalization, machine learning, AI, and knowing how to use those tools. Education and re-education tend to be the areas that slow down programs the most, because there are so many test types you can use to get to your end goal, and there's often a lack of understanding of how to apply them to the strategies. That oftentimes turns into long, drawn-out conversations, or PowerPoints proving your point as to why you should use this type of test versus another. All of those go into slowing down a program.
Rommil Santiago 6:52
I liked that you talked about the education and re-education part, because that's something that comes up in my world a lot, from a different angle. At Loblaw Digital, we're growing rather fast; I think we doubled our headcount in a year or two. So what's happening is, it's not necessarily re-educating people, it's training all the new folks who keep coming in, or the replacements for people. It's not necessarily reselling them on experimentation itself, because they joined knowing well ahead of time that we focus on experimentation. It's more like: this is how we do it, right? This is the process, this is who you talk to. They don't know anyone, and we're all running 100 miles an hour, and it's like, come on, catch up.
Shiva Manjunath 7:38
Well, to add to what you're saying, Rommil, new people coming into a company also bring their own particular way that their previous company did optimization, how they structured it. So, to your point, I mean, there are definitely wrong ways to do it, but within a lot of orgs there are nuances to making an optimization program fit what you need, based on your staff, based on the matrix of the org that you have, based on your KPIs, things like that. So it's definitely in the education camp to let them know: this is what we're doing, and this is why we're doing it. I'd also say there should be a level of malleability to adjust. Let's say you have a product manager who is extremely CRO-knowledgeable because they did that in a previous life; maybe you give them the benefit of the doubt and leverage them more, versus someone who understands the principles of CRO but doesn't really know how to make an optimization program run and hum. Maybe there you step in a little more and say, all right, this is the way we should be doing it, for these reasons.
Rommil Santiago 8:49
I like that you touched on the staffing piece, because that's a point I want to bring up for brand new programs. At least, this is what I keep running into when starting a new program at a company. It's like, "We believe in it, what do you need?" And when you list out the resources you'd like, it's always, "I don't know about that. I don't think you need this many headcount, or this many developers," or what have you. And you're in this situation where you've got to prove you can bring in results with only, like, 10% of what you'd love to have. It's hard to get faster when you need more folks, but you can't get more folks unless you prove you can do good things with the resources you have. At least, that's what I run into a lot. I was wondering if you run into a lot of resourcing issues, Kenya.
Kenya Davis 9:39
Oh, 100%. 1,000%. It's the saddest story. To your point, every company wants you to prove the worth of the experimentation or CRO program with as few resources as possible. And they want to put most of the budget into the tool and let the tool do it all by itself. And I'm like, you know, there's a level of our job that has to be done by a person. I find it odd that leadership will always have high expectations for experimentation programs, but they're not very quick to invest a lot of money in them, when it's really the foundation of how you figure out how much money you're going to make. That teeter-totter happens across all of our industries; I know we all run into it. It's an unfortunate predicament to be in. I think we've all been there.
Shiva Manjunath 10:52
Yeah. To add to that, I think people can oftentimes see CRO as a risk. And that's C-R-O, not "CRO." Gonna give you a little crap for that, Kenya.
Rommil Santiago 11:05
I’m surprised we’re all okay with just the term CRO.
Shiva Manjunath 11:08
But yeah. So, thinking about experimentation: a lot of folks will see it as a potential risk, because a test could lose and we'll lose money. Versus, if we put X amount of dollars into engineering, it's a guaranteed effort, because we're putting X amount of dollars in to build a thing, and there will be a concrete thing at the end of it, whereas an experiment could win or lose, theoretically. But I think that's where a lot of the positioning has to improve on our side: it's risk management, or risk mitigation. When you just invest however many dollars to build, you don't necessarily know whether it's going to work. Yeah, you'll have an end result of a thing, but you don't know how well it'll do until you actually run the experiment. So it's risk mitigation. And that's where I keep going back to test to learn, not test to win. Because if you're paying for insights with your experimentation program, it's effectively the same thing as investing in a UX research program, where you pay X amount of dollars and you get insights from the research you've done. So that's where I try to balance the positioning: experimentation is not just, hey, we run experiments to make dollars. We run experiments to gain insights. And also, we win dollars.
Rommil Santiago 12:36
Yeah, I love that. Because there are so many folks out there who think experimentation is all about the dollars, and they focus really heavily on win rate. It's like, why do we even run experiments if we don't make any money out of it? But when you run an experiment and find out that something would lose, it's like, we just saved half a million dollars, or a million dollars, or what have you. That is a huge thing. And even if it's inconclusive, you can get a learning and build off it. That's a huge thing. I love that you're talking about moving away from this money focus toward a learning focus.
Shiva Manjunath 13:14
Yeah, I mean, that's not to say you don't attribute revenue to the growth of your program, but a fixation only on revenue can sometimes hurt the investment into it, because people will see it as, "I have a guaranteed thing versus a risky thing." It's like investing in Bitcoin versus, I don't know, a mutual fund. I'm not saying that's actually what it is, but the perception could be: I'd rather invest in something with stable growth, rather than swinging for the fences and possibly losing it all to Dogecoin or something like that.
Rommil Santiago 13:51
That's how you pronounce it? I thought it was "doggy" coin. Boy, I feel stupid.
Shiva Manjunath 13:57
I mean, we're talking about "CRO" versus C-R-O, so I could be wrong. Oh, I'm sorry.
Rommil Santiago 14:04
But you know, it's a little bit of a two-faced thing, and I'm very well aware: usually when I try to pitch a program, it's "Look how much money we're going to win." So it's definitely on us to be learning-forward. That's actually a good point. So we've talked a lot about things that slow us down in terms of growing a program. I'd love to start exploring how to make it go faster. Kenya, I'd love to start with you.
Kenya Davis 14:38
Like, that's the holy grail: how do we move faster? I don't like the idea of moving faster just for the sake of moving fast. Like, "We ran 2,000 tests, yay me!" Now everybody wants to say, "I want 2,000 tests a month, just like that company." Doing it just for the sake of saying you're doing high numbers isn't the goal, and I think all of us on this call definitely agree.
Rommil Santiago 15:18
Oh, actually, I'll put an asterisk on that, just to quickly say something. I think at the beginning of a program you have to talk about volume, but you quickly shift away from it. If you're starting a program and you run zero tests, leadership is going to go, "What the heck?" But once you get to a certain number, once you feel comfortable with that number, then 100% shift away. Sorry, I'll let you continue.
Kenya Davis 15:44
No, you're absolutely right. It is a KPI of the program's health and success. But the velocity you really want is the ability to learn fast, to actually have tangible learnings. When it comes to analyzing a test, sometimes we get analysis paralysis, or we're tired of looking at the same tests from the same year and we're rerunning them. Or everybody's running a test and the data is polluted because no one's running them on different KPIs. That's where you have velocity in numbers, but not in learning, because you're spending a lot of your time trying to isolate your own tests and what you can possibly pull from them. So to me, the holy grail would be figuring out a way to launch a high number of tests and really ensure that you're getting clean results from every test, or a high percentage of them. That is the holy grail of a successful program at high velocity.
Rommil Santiago 17:06
Just to summarize that, what kind of KPI would you define for that measurement, just for folks who are starting up their own programs?
Kenya Davis 17:13
I would say, I used to call it an accuracy rate. The accuracy rate was: I have two experiences, I launched them on a page that also has five other experiences running, and I was able to detect a clear difference between my two experiences, yielding an acceptance of my treatment or of my control. In other words, I didn't have five tests on one page where all of them came out flat.
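Kenya's accuracy rate might be sketched along these lines; this is a minimal illustration, assuming the simple definition that a test counts as "accurate" when it reaches a clear win or loss rather than a flat result. The function name and outcome labels are illustrative, not an Evolytics formula:

```python
# Hypothetical sketch of an "accuracy rate" KPI: the share of tests that
# reached a clear decision (treatment or control accepted) rather than
# coming back flat. Labels and names here are assumptions.

def accuracy_rate(test_outcomes):
    """test_outcomes: list of strings, each 'win', 'loss', or 'flat'."""
    if not test_outcomes:
        return 0.0
    clear = sum(1 for outcome in test_outcomes if outcome in ("win", "loss"))
    return clear / len(test_outcomes)

# Five tests on one page, all flat: the failure mode Kenya describes.
print(accuracy_rate(["flat", "flat", "flat", "flat", "flat"]))  # 0.0
print(accuracy_rate(["win", "flat", "loss", "win"]))            # 0.75
```

A high accuracy rate under this definition means your tests are isolated enough from one another to produce usable decisions.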
Rommil Santiago 17:42
Right. At Loblaw Digital, where I'm at, the metric I try to focus us on is time to decision. It's a metric that does a few things: it measures the efficiency of the program, from when we start working to when we get a result, and we're always trying to shorten that in every way possible. Is our workflow working? Are we analyzing efficiently? Obviously that's a macro number; it can be broken down, and then we can start identifying where our roadblocks are. And it includes some of that notion you have around getting that learning, that accuracy, because for the decision part, we have to get a learning out of it. Shiva, how do you, I don't want to say move faster, but how do you get more efficient?
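The time-to-decision metric Rommil describes could be computed as simply as this; a minimal sketch, assuming the metric is just the average number of days from when work starts on an experiment to when a decision is reached. The data shape is a hypothetical, not Loblaw Digital's actual tracking:

```python
# Hypothetical time-to-decision sketch: for each experiment, the days
# from start of work to a decision, averaged across the program.

from datetime import date
from statistics import mean

def time_to_decision_days(experiments):
    """experiments: list of (start_date, decision_date) tuples."""
    return mean((decided - started).days for started, decided in experiments)

program = [
    (date(2022, 1, 3), date(2022, 1, 31)),   # 28 days
    (date(2022, 2, 1), date(2022, 2, 15)),   # 14 days
]
print(time_to_decision_days(program))  # 21
```

Tracking the same number per stage (build, QA, run, analysis) is what lets you see which roadblock is actually eating the time.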
Shiva Manjunath 18:32
Yeah, I mean, the biggest thing for me is that for every experiment, we have a roadmap of the things we want to learn more about our audience. Can we influence them in a particular way? Should we show more branding, or less branding? So we have strategies for things we want to push into and see how our audience reacts. And as we do these things, it's: okay, now that we have this roadmap, let's start identifying the designs we should move toward for testing these hypotheses. And as much as I can, I have pretty regular meetings with the engineering and design teams, as a way to mitigate the amount of back and forth we'd have in the test-building and design process. Let's say we have a design where we want to add a video to a particular part of a product page. I'll be like, "Hey, dev team, how hard is it to add the video?" Pretty easy? Cool. "Hey, designers, pretty easy to add the video. Where does it make sense to add it? Where doesn't it? We're kind of thinking here." The designers give some inputs, and then we go back to the dev team and say, "If we went this route, what would make sense? What's easier to do? What's harder?" And we use these as inputs to guide the decision: we believe this is a slightly better experience based on our hypothesis and the data, but it will require a little more time, versus this other experience, which is maybe a little less polished, but we still gain insights and it'll be quicker to test. So you start balancing these inputs in a framework-type way, so you can think about which are the quick things you should test into, but also where there may be value in spending a little more time to get it right.
I tend to err more on the side of lean, mean, and fast, rather than putting all the makeup on the design. You can do the makeup later; you've just got to get out the door first. I don't know, weird analogy, but.
Rommil Santiago 20:43
So you touched on frameworks, and many of our listeners use a lot of frameworks. What could that framework look like? What would you include?
Shiva Manjunath 20:50
Yeah, I mean, I'd probably be lying if I said I didn't have a framework to prioritize my frameworks. What kind of factors would I include? For sure. The biggest thing for me is that there should be a layer for what data is supporting the thing you're trying to do. If there's a lot of data saying, based on five previous test iterations, we believe this is the best thing for us, that's going to rank a lot higher, and that should prioritize it higher. If the dev team comes to you and says, "This looks great from a design perspective, but it's going to take two sprints' worth of work; versus, if we just do this instead, we can fit it into the next sprint," that can prioritize it too. So that's dev lift. And I think design lift is another big one. You're thinking about what an MVP is to get out, test, and learn, but also considering: does that type of design box you in, design-wise? Or can you quickly iterate from a development perspective? Like one test we ran a while ago, I won't go super nitty-gritty, but effectively we had a landing page that was more branded-focused, with one conversion action. We saw that this new landing page got tremendous engagement, but conversion rates actually went down. So we said, you know what, super low dev lift, no design lift: let's just update the links for that conversion action to go to a different form we already have live. That was literally a test I built in five minutes, we relaunched it, and it absolutely crushed it. So I think those three things are very important: the overall design lift, and whether it's quick to iterate from there; the dev lift; and what you're learning from the test, whether it's going to be extremely helpful, and whether you can iterate from there as well.
Rommil Santiago 22:49
Kenya, you touched on a topic around education and re-education. I was wondering if you had any tactics or strategies to address the education piece, any tips you have to make that go a little bit quicker?
Kenya Davis 23:06
Yes. Put everything in one location, whether that be, you know, Jira, Confluence, or some type of wiki. And whenever you're building out the educational piece of your program, everything needs to be segmented by who the audience is. Is this content meant for an engineer? Call it out. Is it meant for analysts? For data scientists? For marketing? For newbies? Add those tags, add that information in, and create a space where people can navigate easily, where learning is fun, and where what was done before is transparent and easy to navigate through, so they can find inspiration. That living, breathing space is what really drives people's curiosity with testing. As for the documents themselves, as long as they've got some type of format, slap a logo on, whatever you want to do. But making sure you have the right audience in mind will also help with how you write it, because if I'm trying to explain factorial testing to you and you've never tested before in your life, you're not going to be motivated to come back, or to test, or to do anything with factorial testing. You'd be like, "What? A/B sounds simpler to me."
So that's definitely what I would recommend.
Shiva Manjunath 24:42
That's really funny that you say that, because as CRO-ers, we literally look at segments to optimize the digital journey; we don't have a single landing page talking to everyone. And here you are, segmenting your education content too. That's the most meta thing I've ever heard. I love it. It's fantastic.
Rommil Santiago 25:04
I had a moment of, "Oh, man." The only thing I would add on the education piece is something I'm doing now, and I want to hear your thoughts. It's around trying to educate developers on tooling. Realizing that you have to train and train and train them, it's like, well, this is taking up all my time. So what we're doing now, or at least starting to do, is creating videos: a video walkthrough of the flow of the data, the architecture, how to debug certain things, and building a library of these videos for them to watch. It's a little bit easier than reading a whole bunch of heavy stuff. And then we have these regular "Do you have questions?" touch points for all the new folks, which is where you field a lot of the questions they have, and perhaps create new videos from them. I don't know if it's going to work out. I like the idea, because it's my own, obviously.
Kenya Davis 26:03
I promise it'll work. We do demos showing how to use a sample-size calculator, how to pull data in Target, how many users you need, what type of code snippets you need depending on your platform, what type of coding works, and how to check it. And you're so right that sometimes people don't want to read; if there are too many words, their eyes go crossed and they stop. They like diagrams, they like videos, and then the supplemental content to go with them. So I promise you, it'll work. I look forward to it. I did actually want to touch on, Shiva, what you were describing about how you prioritize, or how you identify the impact of a test. We used to call that the prioritization score, and I promise you it had the same components. It really helps people to see everything from a different perspective. "I want to push my test." Well, why? "Because it's going to have this impact." And then you put in how much design it's going to take, and how much engineering work, and once you see that number come out, people think differently. Leaders will go in and be like, "Well, it looks like yours just got pushed to the bottom." But it also shows how something quick and scrappy, like when you realize there's a quick change to make, and you make that change and it has an impact, would still be represented in that type of score. So I would also highly recommend: if a company does not have a prioritization calculation for their testing program, that will help you tremendously.
Shiva Manjunath 27:49
I was just gonna say, to build on that, Kenya: having that framework to show a design and a test and say it's going to take a long time, it's going to require these resources, it'll be difficult. Just seeing that score in context with everything else can be very solid evidence. If you present that to a leader, and the leader comes in and says, "Hey, I want to do this," okay, we've done all the work, it's deprioritized for these reasons. That's fantastic evidence for that leader to be like, "All right, well, crap, let's get this prioritized." And then they'll go to the engineers and say, "We'll give you more resources." And hey, that'll speed up your program. If the leaders look at it and say, "I really want to do this test, I will give you more resources to accomplish what you need to accomplish," then bada bing, bada boom, you're getting more leadership investment from there too.
Rommil Santiago 28:41
The only thing I'd call out, and I 100% agree that every company should have a prioritization framework. I don't care what it is, whatever letters you want to use, as long as it works for you. The only thing I'd highlight is that when picking one, or developing your first one, it may become the most political thing. So be prepared for this: no matter who's using it, you're going to go back and forth with, "Well, who gets to pick that score? And what's a zero? What's a two?" or whatever the number is. No matter how many times I've done this, it becomes at least a week of back and forth and arguments, and eventually people come along. But I just wanted to manage the expectations of anyone who doesn't have one: the first few goes might take a few tweaks and a few conversations.
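The kind of weighted prioritization score the panel describes, supporting data pushing a test up, dev and design lift pushing it down, could be sketched like this. The factor names, the 1-to-5 scales, and the weights are illustrative assumptions, not any team's actual formula (and the weights are exactly the political part Rommil warns about):

```python
# Hypothetical weighted prioritization score: data support counts for a
# test, dev and design lift count against it. All values and weights
# here are illustrative assumptions.

WEIGHTS = {"data_support": 0.5, "dev_lift": 0.3, "design_lift": 0.2}

def priority_score(data_support, dev_lift, design_lift):
    """All inputs on a 1-5 scale. Higher lift means more work, so it is
    inverted (6 - lift) before being weighted into the score."""
    return (WEIGHTS["data_support"] * data_support
            + WEIGHTS["dev_lift"] * (6 - dev_lift)
            + WEIGHTS["design_lift"] * (6 - design_lift))

# A quick, scrappy link-swap test backed by strong data...
print(priority_score(data_support=5, dev_lift=1, design_lift=1))  # 5.0
# ...versus a two-sprint redesign with weak supporting data.
print(priority_score(data_support=2, dev_lift=5, design_lift=4))  # 1.7
```

Making the formula explicit like this is what turns "let's get this crap prioritized" conversations into resourcing decisions rather than arguments.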
Shiva Manjunath 29:36
That’s why I have the Shiva score column that’s weighted at 99%.
Rommil Santiago 29:41
I forget about the Shiva score!
Shiva Manjunath 29:42
Yeah, it’s a great way to get things prioritized.
Rommil Santiago 29:47
It’s a ten! Oh, this freakin Shiva score.
Shiva Manjunath 29:52
It's the framework. It's, I didn't do it, it was the framework.
Rommil Santiago 29:58
All right. I guess that wraps it up for another episode of CR-No. If anyone has any questions, feel free to reach out to us on LinkedIn and on our website. Until next time, thanks for listening.
Shiva Manjunath 30:10
Kenya Davis 30:11
Thanks for having us.