CRO Organizational Structure: Centralized Team vs. Process, with Oliver Paton

AI-Generated Summary

  • Managers in agencies have encouraged me to sell my successes and go off the back of that, but selling answers to problems without a structured process isn’t the right track. I’ve attempted to put previous winners from hotels.com on other websites, and it has failed. Experimentation is a highly subjective field.
  • In the experimentation process, it’s essential to start with between six and ten experiments to set the expectation that not everything will be a winner. This approach allows you to learn from failures, a crucial aspect of experimentation.
  • Customer-level experimentation is the future, shifting away from cookies and evolving to directly impact individual customer experiences, not just a CRO-driven tactic.

Video

Audio

AI-Generated Transcript

Oliver Paton 0:00
And even though I’ve been encouraged by managers in agencies to sell my successes and go off the back of that, when we talked about it being a process: I’ve gone with previous winners from hotels.com and tried to put them on other websites, and it just fails. So, you know, it’s so subjective. And if you’re not going with a structured process, and you’re trying to sell answers to problems, then I don’t think you’re necessarily going down the right track.

Claire More 0:32
Thank you for joining us, Oliver.

Why don’t you start off by telling us a bit about yourself?

Oliver Paton 0:41
Yeah. So I go by Ollie, typically. I’ve been working in experimentation forever, as long as I can remember. That’s probably quite accurate, actually. Based in London. I’ve been doing experimentation since, gosh, it must be 2009, I would have thought. Before that, I was working in media, a lot of online stuff, paid search basically, and that’s where I started to work on small parts of experimentation on the media side of things. I really got hooked, and managed to land a job with Expedia, where they were at this pivotal point: they’d done, I think it was three experiments jointly with hotels.com, and we managed to ramp it up to thousands a year over the course of a couple of years. I was lucky enough to join a team that was really amazing at what they did. And I was pretty scared at the time, because I was relatively new to the subject, quite new to analytics. I really got schooled there in many, many domains: databases, DB2 I think it was at the time. From there, I’ve consulted with, I think, over 150 different companies, at various agencies in London, and had the pleasure of working with some really amazing CRO specialists along the way.

Claire More 2:27
Nice. So yeah, Expedia, I know, is one of those major sites with an amazing experimentation culture. I don’t know if they’re like booking.com, who I know really try to encourage everyone to experiment with anything they want.

Oliver Paton 2:41
Definitely, definitely, yeah. I feel like the guys at booking.com were, at least at the time, more vocal about what they were doing, and we were a bit more secretive. Which now we kind of regret, because everyone always talks about booking.com as the example. Which is funny, because I think Lukas, one of the main guys at booking.com at the time, a lot of what he says they were doing, we were doing at the same time, in an almost identical way as well. His approach was very similar to ours at the time. So yeah, we weren’t copying him. We weren’t.

Claire More 3:20
All right, it’s on the record. Well, that’s great. I mean, that kind of leads us into what we want to chat about today: fostering a culture of experimentation. You obviously have tons of experience spearheading experimentation within companies. So maybe we can start by talking about whether you believe it’s best to approach CRO as more of a process, or whether it’s better to just have a singular team devoted to it, or a bit of both?

Oliver Paton 4:00
Yeah, yeah. Good question. Um, let me answer that in a really roundabout way. The first thing, and the reason why I love experimentation, and I think a lot of people do, is that it’s so multifaceted, and there is no one right answer. I think we sometimes forget that when we read posts on LinkedIn, where people write about their own context a lot. So I just want to preface this with: there is no one right answer. When we look at optimization as a specialism, you can go to specialists in different companies and ask them what they do, and they may come back with very different answers. That can be because they have a centralized CRO team, or maybe they don’t, and CRO sits within analytics, finance, marketing, product, UX and design. It can sit in so many different places, and that gives us very different contexts and different approaches. But I do think that wherever it sits, there’s always a centralized process, and that process really involves people in all of those different pockets. You can draw that process out in many different ways, and I’m sure you’ve seen lots of different process diagrams from different experimentation teams. One thing I’ve noticed recently: I think Ton Wesseling and a few other people have talked about “the death of CRO”, which I think was a misrepresentation myself; his death of CRO is more of a CRO evolution. If you’re doing it right, your team needs to change over time. It can go from a centralized team, to gaining gravitas and momentum, and then you really need to scale it: you need to push the responsibility of test builds out to developers and enable other teams to deliver experiments in a uniform way. So the real answer is both, depending on where you sit on that evolution.
If you’re quite early, then I think it is a centralized team. If you’re quite late on that evolution, it can potentially still be a centralized team. Like I say, there’s no one right answer, but I think it steers you more towards having some centralized experts to govern the process, and then pushing out the responsibility. One other thing I’ve been thinking about a lot in this area, especially when you create a centralized CRO team with designers, an analytics person, a UX specialist and a researcher, is what happens to all of the people in those roles in the main part of the organization who want to get involved. A centralized team can be quite divisive, I guess. So I’d steer towards: if you had a choice of setting up a centralized team, do that very carefully, if that makes sense.

Claire More 7:44
Okay, yeah. And in this kind of evolution, where you’re going maybe from more of a centralized team to a culture or a process involving a whole bunch of different people, does the size of the company play a huge factor in that? Or could it just as well be a huge company that’s just starting out with the concept of optimization?

Oliver Paton 8:08
Yeah, definitely. I think size, the amount of traffic you have, the number of product journeys you have. You know, if you’re a startup, you might just have one sales journey, whereas a very mature, large corporate company might have several different products, and might also have customer areas of its website or app that have millions of views every day, which you can also split up. What can typically happen then is you end up with lots of CRO teams. For the last five years I’ve been working for Sky, and at the time they had a couple of different experimentation teams based on which part of the website or the digital estate you’re looking at; they’re currently trying to centralize it. I’d say the quality of your development teams matters as well. One thing I always map as driving towards maturity is your ability to deliver experiments server side, and again, pushing that responsibility into your development teams, as opposed to having client-side builds and maybe an agency doing them. I forgot where I was going with that.

Claire More 9:34
Yeah, developers, developers.

Oliver Paton 9:39
Sorry, yeah, the quality of development teams. You need high-quality development teams to be able to do that. If you don’t have that, and you can recognize it, they’re releasing loads of bugs, or they say their sprint is every two weeks but they deliver every three months because there are lots of issues, then, you know, do you need that centralized team who can actually build experiments for you and deliver a lot more quickly? Do you need to outsource that to an agency? I think that’s really important as well. Although, watch out: platform stability is always going to be a pain in the ass if you can’t sort that one out. Like I said, traffic volumes, the number of experiments, and the number of product journeys are all continuums on that maturity path that you need to consider when you decide whether to have an agency, a centralized team, or a distributed team.

Rommil Santiago 10:47
This is Rommil Santiago from Experiment Nation. Every week we share interviews with, and conference sessions by, our favorite conversion rate optimizers from around the world. So if you liked this video, smash that like button and consider subscribing; it helps us a bunch. Now back to the episode.

Claire More 11:00
So how would you go, I guess, maybe briefly, from being just a large booking website that doesn’t do any sort of experimentation to what Expedia does now, with what seems like an entire company-wide culture of experimentation?

Oliver Paton 11:25
Good question again. It’s so multifaceted, it could be an extremely long answer. To start with, it really depends on the senior stakeholders that are sponsoring it: their maturity, their mindset, I guess. If they’re quite accepting of risk and failure, then you’re already in a good position. What I always like to do at the very start of the process is never start with one experiment; always plan to start with between six and ten experiments. If you can do that, then you can set the expectation that not everything is going to be a winner, and you can learn from your failures. That almost-green-card to fail is really important in experimentation; otherwise, you get into the habit of testing things that you know are going to win. And again, this is controversial within experimentation: how much time and effort do you put into research? You’ll get lots of different answers from different people. I like to, not minimize, but do the right research and then move to experimentation as quickly as possible. Your research can give you lots of value; it can tell you where your problem areas are, but it doesn’t always tell you very accurately what those solutions should be. The quicker you can get to testing those solutions, the better, and I always say experimentation is a form of research. I’m diverging from the point again, which I tend to do, but I’d say: get senior stakeholders to accept that there’s a risk there, and that losing isn’t a big problem. I always think of it like a news feed: if you can balance the good news with the bad news, then you’re in a good position. And getting to a volume of experiments is a really good place to start.

Claire More 13:55
Yep, I’m just gonna jump in. I come from an agency, and the biggest obstacle we find is communicating results to the stakeholders of our clients, and communicating the value of a loser. So one thing we’ve actually adopted, and I don’t think we’re original in doing this, is changing the mindset from win or fail to implement versus iterate versus abandon. That can help change that kind of win/loss framing.

Oliver Paton 14:39
I think, yeah, in a lot of cases I’ve seen agencies sell the fact that they know the answers, right? Even I’ve been encouraged by managers in agencies to sell my successes and go off the back of that. But when we talk about it being a process: I’ve gone with previous winners from hotels.com and tried to put them on other websites, and it just fails. So, you know, it’s so subjective. And if you’re not going with a structured process, and you’re trying to sell answers to problems, I don’t think you’re necessarily going down the right track.

Claire More 15:19
No, you really limit yourself when you’re like, okay, this test lost, let’s try something else. Rather than trying to get to a culture of thinking of losses as something to learn from.

Oliver Paton 15:34
Yeah. What I’ve always struggled with is when to give up on something. Like, we’re on iteration number seven; should we stop? You’ve got a sponsor who’s really wanting to go for it: no, we’ve committed now. That’s another question, I guess.

Claire More 15:57
Yeah, for sure. That’s when the research is important: your hypothesis, and whether you’re testing things that aren’t a real problem. If you’re iterating and finally solving a problem, maybe the eighth one will perform well.

Oliver Paton 16:13
That’s it. Have you ever done decision trees, where you plan beforehand, and if it fails, these are the things you’re going to look at, and based on those things you do X, Y, or Zed?

Claire More 16:25
Yeah, yeah.

Oliver Paton 16:27
I haven’t used them myself, but I think it’s a really good framework.

Claire More 16:31
Yeah, we’ve never used test-specific decision trees. I think that would be a really interesting thing to try: more of an umbrella decision tree on when to move on from a test and when to continue to iterate. Cool. So the first step is getting buy-in, making sure the higher-level stakeholders are bought into experimentation. What would step two be?

Oliver Paton 17:04
There’s also, as mentioned, the background work, which you get the analytics team to do: just always make sure your data is accurate, right? Which is getting harder and harder to do, and we’ll talk about customer-level testing in a bit. But cookies are all over the place, and I don’t think everyone’s realizing that with Safari deleting cookies, if your A/B testing platform uses cookies, you’re probably allowing customers to move from one variant to another. I think that’s widely ignored at the moment. So testing the accuracy of your platform, checking it against database orders and those types of things, is really important, as well as looking for SRM and various other measures to make sure things are accurate. And then I think internal PR is going to get you a long way, if you can learn to communicate results in a really punchy way; by punchy I mean brief. I’ve always said experimentation is like the gateway drug to data science, in that it really gets people hooked. It’s so simple for senior leadership, who are not necessarily data savvy: if you make it palatable and enlightening in bite-sized pieces, and you’re showing them results in a single slide, they’ll get hooked really quickly. And that’s how it will grow and scale really quickly.
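As an aside on the SRM (sample ratio mismatch) check mentioned above: here is a minimal sketch of what such a check can look like for a two-variant test, using a chi-square goodness-of-fit test against the expected split. The `srm_check` helper, its 0.001 threshold, and the example numbers are illustrative assumptions, not any particular platform's implementation.

```python
import math

def srm_check(observed, expected_ratios, alpha=0.001):
    """Chi-square goodness-of-fit test for sample ratio mismatch (two variants).

    observed:        visitor counts per variant, e.g. [10000, 10300]
    expected_ratios: intended traffic split, e.g. [0.5, 0.5]
    Returns (chi2, p_value, srm_detected).
    """
    total = sum(observed)
    expected = [total * r for r in expected_ratios]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # For 1 degree of freedom (two variants), the chi-square survival
    # function is erfc(sqrt(x / 2)), so no scipy dependency is needed.
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value, p_value < alpha

# A mild imbalance on a 50/50 split: not flagged at alpha = 0.001
chi2, p, srm = srm_check([10000, 10300], [0.5, 0.5])

# A larger imbalance: flagged, so the assignment mechanism is suspect
chi2_bad, p_bad, srm_bad = srm_check([10000, 10600], [0.5, 0.5])
```

A very low alpha (0.001 rather than 0.05) is common for SRM alarms, since the check runs on every experiment and a false alarm halts a test.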

Claire More 18:47
Yeah, for sure. I can see that snowballing across all sorts of different areas of a company once they see the success in one. Cool, well, super interesting. Going back to team versus process: is there a specific moment when a company should pivot from more of a centralized team to a more general process or experimentation culture?

Oliver Paton 19:27
That’s a good question. Again, there are so many variables in it. I don’t think there’s one pivoting point, but I think it’s once it feels like there’s a bottleneck with the experimentation team. So you either hire a massive experimentation team so there isn’t one, or you need to enable multiple teams to deliver experiments. The other part is probably: are there people in other teams who feel like they want to be in experimentation, but they’re not? Look at the experimentation process: hypothesize, validate ideas, prioritize, design, put in analytics requirements, build the experience, QA, go live, analyze. None of that requires a CRO specialist. If you looked at it afresh, without a CRO specialist there, you’d have volunteers from all around the company. And culture is built around people, right? So if you centralize it, you’re not growing a culture; you’re siloing it, you’re creating a new country, right? Whereas if you keep getting people involved along the process, and this is why in a couple of my videos I’m saying we’re becoming shepherds, no longer specialists: we know the process really well, we’re trying to piece it together and get the right people involved. If you can do that, then you have a culture. And the test of culture is when you leave: does that process still continue? Because if it doesn’t, you haven’t changed the culture in any way; it would just fall apart, and you’re still that pivotal piece of it. That’s kind of my test of whether I’ve succeeded in creating an experimentation culture: if I leave, does it still continue?

Claire More 21:32
Right, well, all very interesting. Let’s move on to experimentation at a customer level. Obviously, you brought up that Safari is removing cookies. So why don’t you explain what exactly you’re talking about when you mention experimentation at a customer level, and how is it different?

Oliver Paton 21:57
Sure. So there are kind of four scopes to experiments, and we typically only talk about one of them. There’s an impression scope, which is typically used for click-through optimization: it’ll be used on Google search, or in the search results listings on your results page, to decide what sort of results you want to display. You have a session scope: do you want to try and convert someone within a session? User, or cookie, is the one most people use these days, which allows us to assess whether someone converts, or comes back and converts over a longer duration, and I do think in most cases that’s the right choice, unless you want to use multi-armed bandits, which just don’t work at a cookie or user level, which I can explain in a second. And then, because of the cookie complications and the things that are changing in the market, and I think COVID to some extent as well, people are starting to focus more on the customer. As an industry we talk about it not being CRO anymore, it’s experimentation, and along with that ethos, people are starting to run experiments that have effects across channels. It might be that you run an experiment to increase conversion online, but also decrease calls in the call center. Now, to do that assessment, you need to assign a real person, as opposed to a cookie on a device, right? I think that’s happening more and more. And now that I work in insurance, I’m in a great position: in the UK, almost all of insurance companies’ traffic comes from price comparison websites, and we know who the customer is. For most of our customers, you come onto the website and we know who you are immediately, so it’s perfect for this type of thing. But before that, you know, especially over COVID, lots of clients were trying to retain customers when everyone was on furlough.
Some people were getting laid off. And we did

Claire More 24:26
Sorry, furlough is, like, the government-sponsored, basically...

Oliver Paton 24:32
Free money. We’re paying for it now, believe it or not. I think everyone is. But yeah, people were making cutbacks, people were being laid off, and companies were trying to retain customers. A couple of years ago, for a big telco, we were trying to retain their customers, but also allowing them to downgrade their products so that we would retain them in the long run. We would also allow them to downgrade online, to reduce the call center volume of people dialling in trying to downgrade, because previously they always tried to force you down the call center route to make it a little bit more difficult to downgrade. Black hat, or dark UX, I guess. But yeah, real customer-level experimentation, I really believe, is the future, down to cookies, and down to experimentation growing legs into the customer area, not just being a CRO-driven tactic, I guess.
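To make the four experiment scopes Oliver describes concrete, here is a hypothetical sketch of how the randomization unit differs per scope. The `assignment_key` function and its parameter names are made up for illustration; they are not from any testing platform.

```python
def assignment_key(scope, customer_id=None, cookie_id=None,
                   session_id=None, impression_id=None):
    """Return the randomization unit for a given experiment scope.

    impression -> each page view / search result assigned independently
    session    -> consistent within one visit only
    user       -> cookie/device level, consistent across visits on one device
    customer   -> a real, known person, consistent across devices and channels
    """
    keys = {
        "impression": impression_id,
        "session": session_id,
        "user": cookie_id,
        "customer": customer_id,
    }
    if scope not in keys or keys[scope] is None:
        raise ValueError(f"no identifier available for scope {scope!r}")
    return keys[scope]

# Customer scope survives cookie deletion because the key is the person,
# not the device:
key = assignment_key("customer", customer_id="CUST-42")
```

The practical difference is just which identifier feeds the bucketing function: with customer scope, clearing cookies or switching devices does not move someone between variants.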

Claire More 25:54
So, in terms of using customer data for that, do you foresee any specific ethical considerations? Everybody’s really concerned; obviously, in Europe there’s GDPR. How would we navigate that and keep a strong, you know, environment of experimentation?

Oliver Paton 26:20
I mean, I think you have to have opt-in anyway in Europe; you have to have opt-in for cookies. And there is an argument over whether you have to be opted in for experimentation or not, and that is very unclear, to be honest. When you read the GDPR documentation and the PECR documentation, it’s pretty unclear whether you really need it. You can make arguments to say it’s part of how we monitor our product, and you can then legitimize it, but it’s kind of questionable. So it’s been an issue for a while, just even with cookies. I think now it’s going to be very similar to marketing contact preferences, in that you should probably gather consent. Gathering consent is going to be more tricky, though. In a way, there are kind of three methods of actually identifying someone. You can get them to log in. You can use a customer data platform, where at some point you’re going to have had to log in, but it can then recognize or map the cookies that do exist, or any other identifiers from third parties, and plug that into your experimentation platform. And you can use pre-assignment: taking your whole customer database, randomly splitting it into segments, and then feeding that into your current experimentation tool, so it can target a massive list of customer IDs. You can use Optimizely, or any other A/B testing tool, to do that. One other thing is Optimizely Full Stack. I don’t work for Optimizely, by the way, and other experimentation tools are available, but Full Stack ultimately allows you to use a third-party ID as its primary ID, and there are probably other tools out there that allow you to do that.
So you could use your customer ID as that primary ID, and then once you’re assigned, Optimizely can use that customer ID to allocate you consistently across different devices as well. Which is pretty neat.
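The pre-assignment idea, randomly splitting a customer database into variants before the experiment runs, is often implemented with deterministic hashing rather than a stored list, so the same person always lands in the same variant on every device. This is a rough sketch under that assumption; the `assign_variant` function, variant names, and weights are illustrative, not Optimizely's actual bucketing algorithm.

```python
import hashlib

def assign_variant(customer_id, experiment_id,
                   variants=("control", "treatment"),
                   weights=(0.5, 0.5)):
    """Deterministically bucket a customer ID into a variant.

    Hashing (experiment_id + customer_id) means the same person gets the
    same variant on every device and channel, independent of cookies,
    and each experiment gets an independent randomization.
    """
    digest = hashlib.sha256(f"{experiment_id}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    cumulative = 0.0
    for variant, weight in zip(variants, weights):
        cumulative += weight
        if bucket < cumulative:
            return variant
    return variants[-1]

# Same customer, same experiment -> always the same arm, on any device
v = assign_variant("CUST-42", "checkout-redesign")
```

Seeding the hash with the experiment ID is the important design choice: without it, the same customers would land in the treatment arm of every experiment, correlating results across tests.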

Claire More 28:46
Yeah, I guess, along with the evolution of CRO, or CRO evolution, it’s moving away from website-focused experimentation and more toward full stack. It seems like we’re going to have to pivot that way.

Oliver Paton 29:04
Definitely, yeah. I think stuff like pricing experiments has always been a bit of a gray area, but if you can control them across multiple platforms, that’s something you could open up. Which could be really cool.

Claire More 29:21
Earlier, you mentioned the use of multi-armed bandit tests. And yeah, you’re not a fan.

Oliver Paton 29:30
I’m not a fan. It’s not really possible when you assign at a user level, right? So with a multi-armed bandit, the analogy is that you have lots of slot machines. The difference between a slot machine and a user, a customer, a real person, is that as a customer goes through a sales journey, they become more and more likely to buy, in most cases. If you look at a visit curve or session curve, and ask, on average, how many sessions does our customer base have before they buy, it might peak at one or two, or it might peak at seven, depending on your product. With a multi-armed bandit, the probability of it paying out each time is normally the same, whereas with a customer, it increases over time. And you get this thing called return user bias, which is exactly the reason why you can’t change splits over time against a user. If you go from 50/50 to 90/10, then the proportion of returning customers versus new customers the next day will be different, and your 10% split will almost always have a better conversion rate. That’s because you’re adding fewer new customers, but your returning visitors are the same, right? And the more you change it, the more it’s going to break. So that’s why it’s always a big no-no. But then you have these experimentation platforms, not going to name any in this instance, who say they can auto-optimize, while they’re also assigning users at a user level. You can’t do that while users are in your experiment. You need to operate multi-armed bandits at an impression level, and they’re great for that; Google will do that, at an impression level. Really, what you then need to do is only assess visitors once, on whether the multi-armed bandit paid out or not, and then on the next visit they’re not in the experiment anymore. Because if they convert the second time, you don’t know whether that’s because they played a different machine,
or because they’re a returning customer with a higher conversion rate, because they’re further down the decision journey. So I think a lot of platforms out there haven’t thought it through properly, if I’m honest.
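Return user bias is easy to demonstrate with a toy simulation. In this sketch all the numbers are made-up assumptions: 5% conversion on a first visit, 15% on a return visit, and no real difference between the variants. After the split for new users moves from 50/50 to 90/10 while existing users keep their variant, the 10% arm looks like a winner purely because of its returning-user mix.

```python
import random

def simulate_split_change(seed=0, n_per_day=20000):
    """Toy simulation of return user bias when a user-level split changes.

    Day 1: new users assigned 50/50. Day 2: new users assigned 90/10,
    while half of day-1 users return, keeping their original variant and
    converting at the higher return-visit rate. Neither variant has any
    real effect, yet variant B's measured rate comes out higher.
    """
    rng = random.Random(seed)
    converted = {"A": 0, "B": 0}
    visits = {"A": 0, "B": 0}

    # Day 1: 50/50 split, 5% first-visit conversion
    day1_users = ["A" if rng.random() < 0.5 else "B" for _ in range(n_per_day)]
    for v in day1_users:
        visits[v] += 1
        if rng.random() < 0.05:
            converted[v] += 1

    # Day 2: new users now split 90/10, still 5% first-visit conversion
    for _ in range(n_per_day):
        v = "A" if rng.random() < 0.9 else "B"
        visits[v] += 1
        if rng.random() < 0.05:
            converted[v] += 1

    # Day 2: half of day-1 users return in their original variant,
    # converting at the higher 15% return-visit rate
    for v in day1_users[: n_per_day // 2]:
        visits[v] += 1
        if rng.random() < 0.15:
            converted[v] += 1

    return {v: converted[v] / visits[v] for v in ("A", "B")}

rates = simulate_split_change()
# rates["B"] > rates["A"]: B's "win" is pure returning-user mix, not a real effect
```

The bias comes entirely from the denominator mix: after the split change, the 10% arm is mostly returning, higher-converting visitors, which is exactly why user-level splits must stay fixed for the life of a test.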

Claire More 32:12
Yeah. Especially with the switchover to GA4 and its user-level, or user-scoped, data. Hopefully they’re paying more attention to that.

Oliver Paton 32:22
Yeah, definitely. I don’t think many people from those platforms have thought about that return user bias properly. I think sometimes the platforms are like, here’s an amazing service, but they haven’t used the product as much as, I guess, consultancies have in some cases.

Claire More 32:44
Right. Cool. Well, I think that pretty much wraps up everything I wanted to chat with you about today, and it was a great discussion. I love the analogy of A/B testing as a gateway drug for any company.

Oliver Paton 33:09
You never know if people are going to take that in the wrong way, but yeah.

Claire More 33:14
Well, yeah. Hopefully they’ll take it in a positive way. Um, before we go, is there anything going on in your life that you’d like our listeners to know about?

Oliver Paton 33:31
Um, good question.

Claire More 33:38
A party that you want to...

Oliver Paton 33:41
You know what, probably, yeah. We had our summer party last Thursday, and I’m probably still recovering, so I’m a little bit slower in responding to the question. No, I mean, I’m just working so hard at the moment on pet projects on the side. I started a bit of an e-commerce business, which is doing pretty well. You start something just thinking, I’m going to set up a website to experiment with analytics, to set up GA4 before my company does, so I know what I’m doing. Boring, I know. And inspired by loads of kids on TikTok importing stuff. I’m actually starting to make a bit of money out of it, which is kind of cool. So if anyone needs any RC cars in the UK, go to www.radio-control.co.uk and you can get them there. It looks terrible, I’m not proud of it, but it’s working. So yeah.

Claire More 34:47
That’s awesome. I’ll definitely check that out.

Oliver Paton 34:52
Everyone’s got to have a niche.

Claire More 34:54
Yeah, definitely. Okay, well, thanks so much. How can people reach you?

Oliver Paton 35:01
On LinkedIn, always there. Oliver Paton 1, I think. I’m also ‘the Oliver Paton’, but I lost that account. So yeah, Oliver Paton 1, if I think that’s right. Put the link somewhere. Okay.

Claire More 35:15
Yeah, we will. Or radio-control.co.uk. Radio dash...

Oliver Paton 35:21
Contact will be a robot that will probably reply, but you know, it gets to you eventually.

Claire More 35:30
Cool. Well, thanks so much for joining us on the podcast. It was super interesting to chat with you. I hope you have a good rest of your week.

Oliver Paton 35:40
You too. Nice to meet you.

Claire More 35:42
Thanks. Cheers. Bye

If you liked this post, sign up for Experiment Nation’s newsletter to receive more great interviews like this, memes, editorials, and conference sessions in your inbox: https://bit.ly/3HOKCTK

