Experiment Nation - The Global Home of CROs and Experimenters


Build or buy your Google Optimize replacement with Kenneth Kutyn

Video

https://youtu.be/zaB90hnm4Mc


AI-Generated Transcript

Kenneth Kutyn 0:00
Does it give you a competitive edge to build your own experimentation product? No companies are out there building their own Google Docs, because if you are Nike, you're not going to beat Adidas by having a better word processing tool. So it's the same thing: if you are a Nike and you want to experiment better, are you going to get a competitive edge from that? Do you have the engineering resources to build that? Or are you just going to do it to try and save costs? Because it's often the opposite that's true.

Charlotte April Bomford 0:31
Hi, this is Charlotte Bomford of the Experiment Nation podcast. Today we have a special guest. His name is Ken Kutyn. He has more than six years of experience working at experimentation software vendors, based in Vancouver, San Francisco, London, Amsterdam, and now Singapore, and he has helped customers across EMEA and APAC build and run experimentation programs. He also wrote his master's thesis on the role of experimentation in enterprise software development, and has a passion for data visualization and data-driven decision making. When not talking A/B testing, Ken loves to travel, as you can tell by the places he has lived, and to cook and explore Singapore with his family. That's a very interesting journey you've been on, Ken. I'm pretty sure the listeners would want to learn how you got into experimentation. What role are you in today?

Kenneth Kutyn 1:29
Yeah, definitely good to chat with you, Charlotte. It's good to be here on the podcast. So how did I get into experimentation? I was working at Oracle on the go-to-market team for their BI products, helping with data visualization, and I didn't know much about A/B testing; we'd all kind of heard vaguely in the news about this thing Google was doing, and that was about it. Then I came across a role at a company called Optimizely and started looking into it. I thought it was really neat that you've got this discipline where there's a UI and design side to it, there's a tech side to it, there's the math and statistics side to it. The more I looked into it, the more interested I got; I ended up getting the role and kind of fell into this world, and was really excited to be part of it. This incredible new way to make decisions, to build better experiences, to ultimately make products better for customers, and to do so in such a data-driven way that had never really been possible in the past, at least not at this scale and frequency of iteration. So that's kind of how I arrived in the experimentation world, a little bit by accident, but I've been loving it ever since.

Charlotte April Bomford 2:39
Where do you see yourself going with the role that you have today?

Kenneth Kutyn 2:44
Yeah, I think that's part of why I love the job, you know: you're working with companies on the cutting edge of building great products, and continually striving to build better experiences for users. There's been this interesting journey for experimentation. We used to think it was just A/B testing; we call it CRO sometimes. And maybe five or six years ago it was very common that companies had a CRO specialist or CRO manager who often worked a little bit in isolation. Honestly, that's how it still looks for a lot of companies, and it definitely was the norm six years ago, where you've got one person in this role, often even at a big company, one person who's taking on, "Oh, just improve our conversion rates." They're doing a lot of A/B testing, and they don't get a ton of support from marketing, they don't get a ton of support from dev or from product. Often they fall into the marketing umbrella, and they're just trying to tweak and improve things. And we've seen this evolve a lot, right? Companies have seen great successes from experimentation, and some great ROI as well, and that's led to a broadening of scope, which is great: more resources, developers who are now part of CRO teams, or more UX time devoted to experiments. In parallel, there's been a lot of adoption from product teams. Engineers have been testing for a long time, right, in terms of basic blue-green deployments and simple feature flags, but now product teams are really buying in and testing every feature, and realizing that, yeah, we should consider everything we launch as an experiment until we actually know that there's user value attached to it. It's just an idea until we've reached that point. So that seems to be where things are trending. And more broadly, where does it go from here in the future? Good question; that's anyone's bet. There are lots of trends happening, maybe in parallel. Everyone's trying to inject AI into everything right now, with some mixed results. And, you know, in the software world (I've been working with software vendors for experimentation for the past six years), we see these cycles of consolidation, and then new best-of-breed, standalone players pop up, and then there's a cycle of consolidation again. We've seen that in experimentation, where, say, your marketing automation tool, well, guess what, now it has an A/B testing feature so you can test your marketing messages; your landing page design tool, great, now it also has testing built in. So you get these big players who add A/B testing features, you've got the focused A/B testing players, and everything in between. And I think that will kind of continue to happen over time as well.

Charlotte April Bomford 5:44
Yeah, I agree, because, like you've said, there's software for email marketing, and then they would add, oh, you know, you can A/B test which variation would be perfect for a specific segment. That's really interesting. You've also mentioned something about AI being injected into different experimentation software. Is that something you feel would be an advantage or a disadvantage in the future?

Kenneth Kutyn 6:14
Yeah, there are 100 percent use cases; I would never debate that. What I'm wary of, and I hope and think a lot of people are wary of, is the marketing hype we have to cut through. You go to the landing page for a software product and it says "AI-driven experimentation, optimize every experience for every user," and you've got to think, what does that mean? Is it reliable? And what I would come back to is, is it measurable? At the point where you're delivering a unique experience to every visitor, how do you determine your control group to see if those experiences are working?

Charlotte April Bomford 6:51
I agree. Yeah, that's a good point.

Kenneth Kutyn 6:54
Hopefully you can find a sweet spot where, yeah, there are use cases for AI and ML. Product recommendations is an obvious one, right? And then you can have a holdout group who gets a static list of product recommendations, compared to the people who get their personalized list, and then we're able to make some statistical inferences about the uplift those recs are driving, even though we're getting down to the one-to-one level of personalization. What we want to avoid is just AI for the sake of it, this kind of black box where you throw something over the fence, see what comes back, and assume it's better than the existing model, or assume it's better than the more human-driven model.
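
To make that concrete, here is a minimal sketch (not from the episode) of how a holdout for personalized recommendations might be measured: a deterministic bucket assignment plus a simple two-proportion z-test on conversion counts. The user IDs, split, and numbers below are hypothetical.

```python
import hashlib
from math import sqrt

def assign_bucket(user_id: str, holdout_share: float = 0.1) -> str:
    """Deterministically assign a user to 'holdout' (static recs) or 'personalized'."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return "holdout" if h < holdout_share * 10_000 else "personalized"

def two_proportion_z(conversions_a: int, n_a: int, conversions_b: int, n_b: int) -> float:
    """Z-statistic for the difference in conversion rate between two groups."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

print(assign_bucket("user-42"))  # same user always lands in the same group

# Hypothetical results: static list (holdout) vs one-to-one personalized recommendations
z = two_proportion_z(conversions_a=450, n_a=10_000, conversions_b=520, n_b=10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 would suggest a significant uplift at the 5% level
```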

Charlotte April Bomford 7:36
Okay, so let's get straight to it. Google Optimize is sunsetting in September 2023, and a lot of companies are going crazy, scrambling right now (or weren't really scrambling before, let's just say), and because there's a limited amount of time, they're now looking for a replacement, right? For those who haven't found a replacement yet, what would you advise them to look for in these tools?

Kenneth Kutyn 8:10
Yeah, great question. It's certainly top of mind in the experimentation and CRO world. And I find the sentiment on LinkedIn really interesting, because we've got this Google Analytics sunset as well happening around the same time, and there's so much content out there, resources and migration guides, for GA4. Some people are complaining about GA4, some people aren't; some people are ready, others are not; but everyone's kind of talking about whether it's a good fit, and here's my migration guide. If you look at Google Optimize, on the other hand, the sentiment is 100 percent "we don't know what we're doing next." There's not this same kind of obvious migration path, and there's not as much content out there on the best things you can do to prepare yourself, to find a better solution, and to transition. So it's a very different feel, despite the fact that we have these two sunsets happening almost coinciding with each other. But yeah, let me start to break it down and think about what you can do if you are on Google Optimize right now. Let's say you're using it with some consistency, running a number of experiments a month, and you want to keep that going. A few choices. The first is, you know, you can try and find something similar. There are a whole number of what we can call visual editor, client-side, what-you-see-is-what-you-get testing tools out there, ranging from some very low-cost or free ones to more expensive ones that are part of an Adobe or Oracle suite, and everything in between. That seems like the obvious choice, right? But there are a few reasons to maybe pause before you jump into that. The first is that this isn't the only way to experiment, and the market, the industry, seems to be shifting towards a lot more of what I would term full-stack experimentation: experimentation more tightly integrated with your code base, usually powered by SDKs and APIs. That starts to unlock new use cases, better performance, more ability to experiment across channels and deeper in your tech stack, and so on. So that's one reason to hesitate; maybe you're ready for full-stack experimentation, and there are a few ways to kind of gauge that.

Charlotte April Bomford 10:39
Yeah, I was about to ask you that question. We have different types of listeners at different stages of experimentation; would you be able to explain or expand further on what full-stack testing is? And how do you know if a company is ready for full-stack testing?

Kenneth Kutyn 10:57
Yeah, certainly. So what we have with Google Optimize and similar tools is you typically have a JavaScript tag you add to your tag manager or your HTML file; it downloads some JavaScript and runs some experiments on your site. You get a nice visual editor, you can click and drag elements and make changes, and largely you can be independent of dev for simple experiments. Where this starts to fall over, though, is when I want to build something a bit more complex; now you're getting developers involved. You can still do it through a visual editor type tool, but you do need a developer to help you. It also means that all your experiments are limited to what can run in the browser, and more and more companies' experiences do not stay in the browser: you've got mobile apps, you've got kiosks at an airport, you've got logic that exists server side, on a watch, whatever it may be. So you need to be able to experiment across these channels, and you can't do that with a tool that only runs in the browser. That's where full-stack testing comes in: the idea that, hey, let's give you a way to get an experiment decision anywhere you have code. Anywhere your product or your logic exists, we want to let you experiment there. That really opens up a whole number of new use cases for testing teams. It's a very different way of working; all of a sudden you're launching experiments the same way you launch features in code, and it's well suited to a cross-functional product team where you've got PMs and engineers, and hopefully UX and analysts, working on the same team building features and experiments. Compare that to your visual editor tool where, you know, one person, or a marketeer with little in the way of technical skills and resources, could be publishing some tests themselves. So it's a very different way of working, but full-stack testing does open up a whole bunch of new doors.
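
As a rough illustration of the full-stack idea, and not tied to any particular vendor's SDK, here is a minimal sketch of a server-side experiment decision living next to the business logic it changes. The `variant` helper, the experiment key, and the ranking weights are all hypothetical.

```python
import hashlib

# Hypothetical full-stack experimentation helper: the decision happens in server code,
# not in a browser tag, so the same pattern works for APIs, mobile backends, kiosks, etc.
def variant(experiment_key: str, user_id: str, allocations: dict[str, float]) -> str:
    """Deterministic variant assignment so a user always gets the same experience."""
    bucket = int(hashlib.md5(f"{experiment_key}:{user_id}".encode()).hexdigest(), 16) % 100
    cumulative = 0.0
    for name, share in allocations.items():
        cumulative += share * 100
        if bucket < cumulative:
            return name
    return "control"

def rank(query: str, weights: dict[str, float]) -> list[str]:
    # Placeholder ranking for illustration only: pretend title matches outweigh body matches.
    candidates = {"result-a": weights["title"], "result-b": weights["body"]}
    return sorted(candidates, key=candidates.get, reverse=True)

def search(user_id: str, query: str) -> list[str]:
    # The experiment decision sits right next to the server-side logic it changes,
    # e.g. trying out different search-ranking weights.
    arm = variant("search_ranking_v2", user_id, {"control": 0.5, "new_weights": 0.5})
    weights = {"title": 2.0, "body": 1.5} if arm == "new_weights" else {"title": 1.0, "body": 1.0}
    return rank(query, weights)

print(search("user-123", "running shoes"))
```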

Rommil Santiago 12:55
This is Rommil Santiago from Experiment Nation. Every week we share interviews with, and conference sessions by, our favorite conversion rate optimizers from around the world. So if you liked this video, smash that like button and consider subscribing; it helps us a bunch. Now back to the episode.

Charlotte April Bomford 13:09
So the question is, how do you know? Because, again, I agree with you that full-stack testing is quite complex. How would you tell when a company is ready for full-stack testing?

Kenneth Kutyn 13:24
Yeah, there are a few questions I would be asking myself. First, what kind of use cases are important to me? We don't have to lie to ourselves and say, "Oh yeah, I want to do server-side testing," when, if you're a company that makes landing pages, maybe that's all you need to do. But at the point where you're saying, "I wish we could change the search algorithm a little bit to try out some different weighting parameters and see which works best," where experiments are much more complicated, "I wish we could try different pricing, I wish we could test in our mobile app, I wish we could change some logic or some content that lives on a server and is pulled down by mobile apps and the web browser," so that's one: use cases. The second, like I said, is your team and how you want to actually launch experiments. Do you want it to be this layer that sits on top of your website only, or do you want to be launching features in code? Product teams often actually prefer the latter, because then they don't have this kind of shadow layer, what I've heard called a "shadow CMS," where you've got two different systems managing what appears on your site. The third is the different channels that you have, the different touchpoints outside of the browser. And another one is performance. When you're using one of those visual editor solutions with a JavaScript tag, there will always be some performance hit, because that tag has to go and download more resources from another server; there's no way around that. With full-stack testing, whether it's running in the browser or on the back end, you can achieve much closer to zero latency, because you're able to do more of the work upfront and basically test without having to compromise on loading time. So that's another reason I see some teams seriously considering switching over.

Charlotte April Bomford 15:20
Yeah, the loading time is always the issue. So, we touched on this earlier: you mentioned Google Optimize and GA4 sunsetting almost at the same time. For companies moving from Google Optimize to a new solution, what would they need to ensure it works well with GA4?

Kenneth Kutyn 15:45
Yeah. So, just going back one step, like I said, maybe one reason for companies to pause in their decision-making process to replace Google Optimize is the potential to go full stack. The second, I would say, is this GA4 migration. I'm also seeing some companies pausing and saying, "Let's not assume GA4 is right for us." They're basically taking a bigger look at what they do for analytics, grouping experimentation in with that, and trying to figure that out; then, as a secondary step, they can decide which experimentation solution they need. If you jump into "let's find the best experimentation product for us," and then, "oh, we also need to find an analytics product," you've maybe done things in the backwards order. So I think that if you have indeed evaluated and decided GA4 is the right fit for your company, at least for the time being, Google has come out and almost directly said they're not going to develop their own experimentation capability in GA4; they kind of alluded to that in a couple of blog posts and product updates. So we shouldn't wait around for the new Google Optimize. But yeah, you need to find something that fits with what your team needs, and that's not just the technical features of the tool. It's also the company you're going to be partnering with. So be honest with yourself: how much support do you need? Do you need a vendor or partner who can help you strategize tests, who can help you with implementation? Those types of things are a big one. Some companies and agencies are well suited to get you up and running with a product and offer you ongoing support, while others are better placed to be a great software vendor where you buy the product and figure out how to use it yourself.

Charlotte April Bomford 17:50
That's much more complicated. Yeah,

Kenneth Kutyn 17:53
People are at different places in that, different levels of maturity, and that's okay.

Charlotte April Bomford 17:57
Yeah. So what are you saying? You're saying, let's go with an alternative to Google Optimize, and if GA4 is fine, let's go with that for the meantime, until we find another solution that fits the company's needs, and just start from there?

Kenneth Kutyn 18:16
Yeah, exactly. What's interesting is that Google obviously comes from a long marketing background. They're interested in selling you ads; that's their business. So that's why Google is always going to focus on how you got users to your site, which ad campaigns are performing well, how they can help you with your SEO; that's the goal of Google Analytics. That being said, GA4, moving more to this event model, is getting a little more into what we might call product analytics. And Adobe, just last week, maybe the week before, announced their new product analytics solution. It really shows that even the big players are starting to recognize the importance of not just acquiring users, but engaging them, keeping them happy, retaining them, making sure they renew and, at the end of the day, keep using our apps and products. So, shameless plug, you know, I'm working at Amplitude now, and we've been doing product analytics for some time; it's great validation to see that there's value and a good reason to focus on what happens after we get the user, not just how we get them in the first place. So if I were looking at a new analytics and experimentation product right now for a company I worked at, those are the kinds of things I'd be looking at: end to end, can I analyze from the customer seeing one of my ads, to landing on a landing page, to signing up for the first time, to retaining, renewing, upselling, and cross-selling? I'd be looking at the whole journey and seeing if that fits for me, and whether I'm able to experiment at each stage along that journey as well. If I can get that from one vendor, or from multiple vendors who each offer me the best solution for a piece, so be it, but I'd be trying to look at it end to end like that. And that probably means the marketing team and the product team need to start working a bit more closely together: no longer having two disparate data sets, two separate sets of tooling for people to learn, essentially two sources of truth, and instead having analytics in one place where they can see the whole customer journey and one version of the truth to go off and make strategic company decisions. At the end of the day, that's what analytics and experimentation are here to solve for.

Charlotte April Bomford 20:44
Yeah, I agree with the end-to-end thing; I think that's one of the most important places you're going to get the data: where people are going, what they're interested in, what they do, and where they came from. That's actually really good advice. I have a question, though. For companies who have the capability to create their own solution in house, what do you think about that? Is that the right choice for some companies?

Kenneth Kutyn 21:20
Yeah, good question. It's a tempting choice; I see lots of companies playing with this idea. A few considerations. The short version is that, like any software product or project, this always comes in a lot more expensive and time-consuming than you expect. If you're a Google or Netflix or Booking.com and you've got almost unlimited engineering resources, then maybe it's worth considering. And the reason why you would do this is, you've got to ask yourself, do we have a special need, a unique need, in how we experiment that we're not going to get from a vendor and that no vendor is going to build, because our business model is so different? Does it give you a competitive edge to build your own experimentation product? No companies are out there building their own Google Docs, because if you are Nike, you're not going to beat Adidas by having a better word processing tool. So it's the same thing: if you are a Nike and you want to experiment better, are you going to get a competitive edge from building your own tool? Do you have the engineering resources to build it? Or are you just going to do it to try and save costs? Because it's often the opposite that's true. There are a few studies around building custom software showing that it's usually an order of magnitude more expensive than getting something from a vendor. And there's a lot of future-proofing you get from a vendor as well. You change your tech stack in two years and start using Go instead of Python; your vendor already has an SDK for Go, and they've got 25 customers using it already. Whereas if you built your whole experimentation stack on Python, now you can't migrate, or you have to pause experimentation for six months while you update the whole experimentation stack too. So I certainly wouldn't say building is always bad. I would say I've seen a lot of companies build something and underestimate the costs and the time to value. And the solution you end up with is often very technical and serves engineers and analysts, leaving product and marketing a bit underserved by that in-house solution; that's typically how it ends up.

Charlotte April Bomford 23:38
I'm actually interested in what you said about the cost. For most of the experimentation software out there, you're essentially buying a license to use it. So if you have the capability in house to build the software for, say, one year's worth of what the external experimentation software costs, and you don't have to maintain it that much, wouldn't that be cheaper than buying someone else's software?

Kenneth Kutyn 24:20
Yeah. First of all, most of these companies who've built internally have blogged about it; they're using it as a way to attract engineers to work on interesting problems, right? Which means that places like Uber and Netflix have publicly stated how many full-time equivalents they have just building and maintaining experimentation tools, and if you look at those, it's not like two; it's more like 15 or 20 engineers working full time on maintaining this product they're building in house. It depends where you are in the world, but 20 full-time engineers in the Bay Area at Netflix salaries is a lot more than most companies spend on off-the-shelf software. Even in Amsterdam, and I won't mention any specifics, I was aware of one company who was looking at buying an experimentation product; they were quoted around 300,000 euros per year for their volumes, and they decided to build in house. They had a team of six full-time equivalents working for multiple years to get it off the ground. So it very quickly adds up, right? The thing about experimentation as well is that, perhaps more than any other software solution you buy, you can measure ROI from it quite easily, and it's often a 5, 10, or 20x return over what you're spending on subscriptions, at least; you should factor in, of course, your own manpower and everything that goes into it. So that's another piece of advice I'd give people who are on Google Optimize and trying to justify whether to build or buy: can you look back over the last year or two at the experiments you've run on revenue metrics and the uplift you're seeing, and use that as a case to get some budget? Maybe budget you didn't have before; if you were on the Google Optimize free plan, it could be the first time you're going to pay for an experimentation software subscription. But hopefully the wins you've gotten in the past 12 months more than justify a new subscription cost for you.
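
As a back-of-the-envelope illustration of how the in-house option adds up, here is a hedged sketch using the figures mentioned in the episode (a 300,000 euro per year vendor quote and six full-time engineers for two years); the loaded cost per engineer is purely an assumed number for the example.

```python
# Hypothetical build-vs-buy comparison; the engineer cost is an assumed figure.
vendor_cost_per_year = 300_000      # quoted subscription price (from the episode)
years = 2
engineers = 6                       # full-time equivalents building in house (from the episode)
loaded_cost_per_engineer = 130_000  # assumption: salary plus overhead per engineer per year

buy = vendor_cost_per_year * years
build = engineers * loaded_cost_per_engineer * years

print(f"Buy:   EUR {buy:,}")    # EUR 600,000
print(f"Build: EUR {build:,}")  # EUR 1,560,000, before ongoing maintenance and opportunity cost
```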

Charlotte April Bomford 26:33
Yeah, I agree; that is very insightful. I was thinking about the costs earlier and thought it might be cheaper, but that makes a lot of sense with the overhead and everything. But yeah, since we're talking about experimentation software, where is the trend going? What's the next big thing in experimentation software?

Kenneth Kutyn 27:00
Yeah, there are a few trends I see. One we kind of touched on is AI, and there are some interesting applications there, of course, around personalization within tests, around recommendations, helping with interpreting analysis, and so on. Another one that I see happening, which I think is really critical, is the software doing a better job of helping you manage your experimentation program. All the experimentation software on the market helps you build and run a test, and usually there's analysis built into that as well; that's the core that all the products do in some form or another. But if you run an experimentation program at your company and you're running, like, two tests a month, and the typical company is getting a winner on 20 to 30 percent of their experiments, that's not a lot of uplift; you're running for a lot of months with zero ROI, right? Maybe at the end of the year you have a couple with a stat-sig uplift, and maybe one of those is on revenue, and you're in the red on your ROI analysis. So really, experimentation works when you as a team are running five, ten, or more experiments a month. And that's when you start to trip over the things that you could kind of hack together for a single test: capturing and prioritizing ideas in a repeatable, reliable framework; documenting hypotheses; ensuring you've got the right backing data and the right metric selection; sharing results with people in the team. All of these things need to happen better as you move from one experiment a month to ten experiments a month. So that's a trend I see in software: a focus on how we can help companies with their experimentation programs, not just with the tech side of building and running a test. I think that's really good. Maintaining best practices as you scale is something software can help with; it can't solve it entirely, but it can definitely help with that.
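
A quick, hypothetical bit of arithmetic on why velocity matters, assuming the 20 to 30 percent win rate mentioned above (a flat 25 percent here for simplicity):

```python
# Expected winning experiments per year at a given monthly velocity (hypothetical 25% win rate)
win_rate = 0.25
for tests_per_month in (2, 5, 10):
    winners_per_year = tests_per_month * 12 * win_rate
    print(f"{tests_per_month:>2} tests/month -> ~{winners_per_year:.0f} winners/year")
```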

Charlotte April Bomford 29:17
That's interesting, when you said two tests a month may not be enough uplift to see an ROI. How many tests would you recommend? Are you after quality of tests or quantity of tests? I'm just interested because I've seen a lot of podcasts before saying, oh, the more tests you run, the more chances of winning, and then other people say maybe four quality tests with a 200 percent uplift would be fine. What are your thoughts on it?

Kenneth Kutyn 29:51
Yeah, I don't know that I have a number or even a rule of thumb. I think it comes down to, first of all, a good prioritization model where you're looking at what's going to be the impact: how many people make it to this part of the flow, this part of the site or app? If the test does well and we get a 10 percent uplift, what does that mean at the end of the day? And you can counter that with an effort estimation for building the test. So then you've got these two competing axes to help you prioritize things, and you start from the top and work your way down. As you go through various experiments, I've heard of some teams who will try and pick a couple of, you know, Hail Marys every month and a couple of simpler, more likely ones, so hopefully they're driving incremental improvements while they try the moonshots in parallel, to have a bit of balance in their risk-reward profile, if you will. So I don't know that there's a standard or a single rule of thumb. And like we said, as more product teams take this over, the testing backlog and the product backlog become kind of merged, and product management has had methods for prioritization for a long time, which experimentation is now somewhat benefiting from.
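
For illustration, here is a minimal sketch of that two-axis idea, expected impact versus build effort; the scoring formula and backlog items are hypothetical, not a standard prioritization model.

```python
# Hypothetical impact/effort prioritization of an experiment backlog
backlog = [
    # (idea, reach = visitors exposed per month, expected uplift, effort in dev-days)
    ("Simplify checkout form", 40_000, 0.03, 3),
    ("New search ranking (moonshot)", 15_000, 0.10, 15),
    ("Homepage hero copy tweak", 60_000, 0.01, 1),
]

def score(reach: int, uplift: float, effort_days: int) -> float:
    """Simple ratio of expected monthly impact to effort; higher is better."""
    return (reach * uplift) / effort_days

for idea, reach, uplift, effort in sorted(backlog, key=lambda x: score(*x[1:]), reverse=True):
    print(f"{score(reach, uplift, effort):8.1f}  {idea}")
```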

Charlotte April Bomford 31:17
Okay. There are a lot of companies, like we said at the start, scrambling to look for a replacement. There's also a lot of experimentation software out in the market, and to be honest, I feel like some features overlap each other and they're almost the same thing, just a different brand. So if you're advising a company and they're looking at, say, five to ten experimentation tools, what's the first thing they need to look at before they decide on one?

Kenneth Kutyn 32:06
Yeah, I'd start with a self-assessment of where you are as a company and team, your culture and appetite for experimentation, to assess how much help you need. Have we been running ten tests a month for the past two years independently and are ready to continue with that, versus we have an agency who builds half our tests, our software vendor is on site once a month delivering training sessions, and we've hired 16 PMs in the last three months with no experimentation knowledge? That's going to play a big role in what you expect from the vendor, partner, or agency you work with, and in who you can work with. Then look at what our most important touchpoints are for our customers. Is it all browser-based? Are we really mobile-first? Do we have a lot of infrastructure experiments and back-end business logic, engineering logic, we want to experiment on? That will influence it as well. Where does the analytics live? I keep coming back to that, right? Have we standardized on one of the big analytics players, are we using more focused best-of-breed solutions, or something in between? Do we expect to do the experiment analysis in the experiment tool, or in our analytics product, or maybe something more manual in house? That will influence it as well; you start to think about how easily you can get data out of this tool, how reliable that data is, whether there's a single source of truth or whether we have discrepancies. Another one is how we're going to target experiments. If you're doing a lot of top-of-the-funnel experimentation, right, landing pages, maybe they're all general tests. But if you're testing in the product, you've got different plans in different regions, you support different types of user personas, and you want to do targeting based on that. You might not have all that user context in the experimentation product; it might live in a CDP, your data warehouse, or your analytics solution. And you've got to start to think about whether these things can work together in a reliable and performant way, whereby I can say, okay, I want to target an experiment to people who have made four purchases in the last month and are on my free plan. What would that look like? Is that something I could do again at scale, or is it going to be a lot of work for an engineer to hack together, and I'm only going to do it once a year?
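
As a small, hypothetical sketch of that last point: a targeting rule like "four or more purchases in the last month, on the free plan" only works if that user context can reach the experimentation product, for example as traits synced from a CDP or data warehouse.

```python
from datetime import datetime, timedelta

# Hypothetical user traits, e.g. synced from a CDP or data warehouse into the experiment tool
user = {
    "plan": "free",
    "purchase_dates": [datetime.now() - timedelta(days=d) for d in (2, 9, 15, 21, 60)],
}

def eligible(traits: dict) -> bool:
    """Target: free-plan users with four or more purchases in the last 30 days."""
    cutoff = datetime.now() - timedelta(days=30)
    recent = sum(1 for d in traits["purchase_dates"] if d >= cutoff)
    return traits["plan"] == "free" and recent >= 4

print(eligible(user))  # True: four purchases fall inside the 30-day window
```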

Charlotte April Bomford 34:44
So, what are the features you usually look at in experimentation software? Because there is software where you can prioritize tests, where you have almost like a Trello board already integrated, and then after the experiment they can also integrate heat maps and all the other things you need for post-test analysis. Aside from those things, what are the other factors a company could look at when choosing a software or an experimentation program?

Kenneth Kutyn 35:35
Yeah, maybe a couple of others to keep in mind would be things like what kind of technologies you're using on the front end. If we're just talking about the visual editor for a moment, some solutions are better than others at handling dynamic content, like a React, Angular, or Next.js site; that can be a struggle for visual editors, and some handle it better than others. On the statistics side, some of these solutions are using quite similar stats engines; a Bayesian inference model is quite common these days, while others use something a little less common, like sequential testing. And, you know, it gets technical quickly, gets over my head quickly; I don't mind admitting that. I always tell companies: look, if you're going to be making business decisions on this, don't assume they're all the same, and don't assume it's good enough. Bring in a data scientist on your team, one of your statisticians, or someone who can look at the technical documentation of the stats model and see if you like it, if you think it's reliable, if it will work for you. So yeah, it comes down to the program management functionality, support for your specific technologies, statistics engines you can work with, targeting you can work with. And then the big one is just integrations: will it connect well to your analytics and play nicely there? If you're doing targeting, where does that data live, and can you get it into the experimentation product? Those types of things as well.
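
To give a flavour of what a Bayesian stats engine can mean in practice, and not any vendor's actual implementation, here is a minimal Beta-Binomial sketch estimating the probability that a variant beats control; the conversion counts are made up.

```python
import random

random.seed(0)

def beta_samples(conversions: int, visitors: int, n: int = 20_000) -> list[float]:
    """Draw posterior samples of the conversion rate under a Beta(1, 1) prior."""
    return [random.betavariate(1 + conversions, 1 + visitors - conversions) for _ in range(n)]

# Hypothetical results: control 480/10,000 conversions, variant 530/10,000
control = beta_samples(480, 10_000)
variant = beta_samples(530, 10_000)

p_beat = sum(v > c for v, c in zip(variant, control)) / len(control)
print(f"P(variant beats control) ~= {p_beat:.2%}")
```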

Charlotte April Bomford 37:10
That's awesome. Last question, probably: you also wrote your master's thesis on experimentation, which is interesting, because I don't think I've met anyone who wrote about experimentation so far. What have you learned from your thesis?

Kenneth Kutyn 37:30
Yeah, my main goal was really to try and unpack this link that we've all accepted: everyone agrees experimentation makes you innovative. And I thought, okay, let's take a step back and see, (a) is there actual evidence that this is true, and (b) if so, why might that be the case? The first challenge you encounter is that no one can agree on what innovation actually means. There are a few soft definitions that get thrown around: coming up with a new product that solves a customer problem, delivers value for you, delivers value for the customer, something novel that hasn't been done before, et cetera, et cetera. That's great, but if we're going to try and measure this, we need a definition we can put numbers on. And there are a few different ideas there as well. Boston Consulting Group, for example, defines it as the amount of revenue you've obtained from new products, taking into account the spend on new product development and your revenue for the rest of your portfolio, those types of things. You can look at things like patents, which aren't a great metric, but they say something: at least you prioritize IP and getting new things to market. So there are a few different ways to measure innovation. Then you've got to look at, okay, how much are companies actually experimenting? For that I pulled a lot of data; I found a database of about 10 or 15 million job postings from the last five years or so and scraped it for certain keywords. In order to benchmark, I wanted to look not just at experimentation and A/B testing, but at things like heat mapping, session recording, and analytics in general, and then some other practices we see from product teams, like roadmapping, prototyping, agile, all these kinds of keywords. I took that job posting data, and I did the same thing with a bunch of LinkedIn profiles.

Charlotte April Bomford 39:31
Sounds like a lot of work.

Kenneth Kutyn 39:34
It was a lot of work, but at least there are some tools out there to help. And I tried to figure out which companies were actually experimenting the most. What was interesting is that, of the different practices, you know, session recording, experimentation, prototyping, and so on, experimentation did have the strongest correlation with innovation, based on a couple of definitions. So it really does seem like, and again, causation is tough and correlation is a bit easier, there's definitely a correlation between innovative output at enterprise software development companies and their propensity to experiment. I'm happy to share the thesis, but it's a bit long and dry. From there you can start to get into, okay, why might this be? And it keeps coming down to the culture, right? How does a company view risk? How do they view new ideas? Where do they think new ideas should come from? How willing are they to try out new things and give them a chance, give them a day in the sun, and see if they stick and drive value? That's what the theory keeps coming back to: really understanding customers is at the core of it.
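
A highly simplified sketch of the kind of analysis described here, correlating experimentation-keyword mentions in job postings with an innovation metric; the numbers are entirely made up and the real thesis method is much more involved.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical data: experimentation-keyword mentions per 100 job postings vs. an innovation score
experimentation_mentions = [2.1, 0.4, 5.3, 1.2, 3.8]
innovation_score = [0.55, 0.20, 0.80, 0.35, 0.70]  # e.g. share of revenue from new products

r = correlation(experimentation_mentions, innovation_score)
print(f"Pearson r = {r:.2f}")  # a positive r would be consistent with the correlation described
```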

Charlotte April Bomford 40:55
Yeah, it's really interesting that through experimentation you get innovation. Because you won't know anything unless you experiment, right? You won't be able to find a different solution if you don't know the problem exists. So it's very interesting that you said that. I'm actually, I would say, blessed and lucky that the team I'm in is very experiment-focused. But I would be interested: what would you say to companies who are able to do experimentation but aren't doing it?

Kenneth Kutyn 41:40
Companies who are able to experiment but aren't yet, and what my advice would be for them?

Charlotte April Bomford 41:44
Yeah, exactly. What would you say to companies, like, hey, get into experimentation, because you might find issues you didn't think existed, and it might help you in the future? What would you advise those companies?

Kenneth Kutyn 42:04
Maybe the most convincing argument you can make is that their competitors are probably doing it. But yeah, at the core of this, why do we do analytics? Why do we do experiments, session recordings? We've all acknowledged that the closer we can get to our customers, the better products we can build. If I could summarize agile in one sentence, it would be that, right? How much feedback can we get, how quickly can we iterate on it, put it in our product as a feedback loop, and get it in front of customers again, as frequently as we possibly can? Hopefully that's not too controversial a thought. And I see a really natural progression from that thought: what's the quickest and easiest way to get ideas in front of customers and see if they like them? It's experimentation. It's the only way you can do that so quickly, at scale, and in a statistically robust way. So I don't know if that would convince anyone, but that's at least the theoretical argument I would use to try and encourage some appetite to experiment more.

Charlotte April Bomford 43:12
You probably had them as well with the competitors point: your competitors are probably already doing it.

Kenneth Kutyn 43:20
If Netflix and Facebook are in there, everyone will be competing with them in some way or another, so

Charlotte April Bomford 43:26
Exactly, exactly. Awesome. Well, Ken, it's been amazing to have you. I would like to thank you on behalf of Experiment Nation, and thank you for the insights. It's been really good. Yeah, that's it.

Kenneth Kutyn 43:40
Great. Thanks so much for having me.


If you liked this post, sign up for Experiment Nation's newsletter to receive more great interviews like this, memes, editorials, and conference sessions in your inbox: https://bit.ly/3HOKCTK