AI-Generated Summary
In an insightful interview at Experiment Nation’s Conference, Bryce York, Director of Product Management at Tatari, shares several key insights. He explores the parallels and distinctions between CRO and product management, especially in smaller companies where overlapping skills are crucial. Bryce shares his personal journey from entrepreneurship to product management, underscoring the significance of both qualitative and quantitative data in experiment design. Additionally, he emphasizes the importance of documentation and knowledge sharing in cultivating an experiment-driven culture.
AI-Generated Transcript
Bryce York 0:00
Yeah, I think CRO and product, they have a lot of similarities, but then they’re also quite different in other ways. I think there’s a lot of overlapping skills, a lot of overlapping work. In many cases, you’ll find that there’s somebody that’s doing CRO or there’s somebody that’s doing product management; in small companies, you often don’t have somebody doing both.
Rommil Santiago 0:19
Hi, my name is Rommil Santiago and I’m the founder of Experiment Nation. Today we have Bryce York, the Director of Product Management at Tatari. Experiment Nation’s Richard Joe recently spoke to Bryce about overlapping skills in CRO and product management, quantitative and qualitative data and the natural loop of CRO work, choosing between getting smart and getting lucky, and finally, the importance of documentation. We hope you enjoy the episode. Let’s get to it.
Richard Joe 0:47
It’s Richard from the Experiment Nation Podcast. Today, I’ve got Bryce York on the phone. Bryce York from New York, I just made that up now. But look, Bryce is originally from... was it Melbourne or Sydney?
Bryce York 1:06
Sydney? Sydney?
Richard Joe 1:08
Awesome. And you’re currently Head of Product Development, I believe, at Tatari, which is an advertising analytics platform. Is that correct?
Bryce York 1:19
It is, for TV, yeah. Happy to share a little bit more about that when we get into stuff, but yeah, that’s where I’m at now. So yeah, originally hailing from Sydney, Australia, but now based in New York City. I’ve been here for coming up on five years now. So it’s nice to hear a familiar accent and chat to somebody that’s from the other side of the world, back where I’m from.
Richard Joe 1:41
Yeah, totally, although to you I’m the Kiwi person and you’re the Aussie. But I think my accent has softened a little bit. Cool. So look, man, you’ve got a very interesting history and background, and it’s a very entrepreneurial background. In having you on the show, I think it’d be good to discuss some of your background: what did you study, how did you get into product management and experimentation and growth, those sorts of things? Can you kind of go through your background a bit?
Bryce York 2:17
Yeah, sure, I’d be happy to tell the story a little bit. It’s great to be here, thanks for having me. It’s been kind of good teeing this up, working through those time zones being on opposite sides of the world; I’m glad we were able to make it happen. So education-wise, I went to uni studying a business degree, and marketing was sort of my focus there, my major. I would say I didn’t learn that much about marketing; I learned more about general business concepts and those sorts of things. From there, I got out into the business world through university, through the entrepreneurship society. We were talking about this actually before we went on the record and turned on the recording: I met a guy who was running kind of an entrepreneurship education company. I had always had this entrepreneurial streak. I got pushed and asked to read Rich Dad Poor Dad when I was about 15, and after getting nagged about it by my old man for two years, I eventually did read it, and then was like, hey, this is actually really interesting, I don’t have to have a job for my whole life. I was hooked on this entrepreneurship thing: it’s not just owning a job, you can actually own a business and get leverage through that. And so I had this fascination with all of it, and timing-wise it was very much on the money. So I met this guy who was trying to build a school for entrepreneurship, where you could learn this, because universities weren’t really doing it. That kind of kickstarted my career: I ended up at that company, growing through the ranks, started out just volunteering at events and actually ended up being CEO of the company. And it was a very conversion-focused business. It was all about driving leads through free materials and videos, then building that into an email newsletter and progressing through a ladder of offers that ended up growing into a monthly in-person mastermind in two of the main cities in Australia. So it was a really great experience, and I learned a lot about what I knew and what I didn’t know. Being around that and making it work was such an awesome learning experience, and that was where I really got interested in growth and experimentation. I didn’t really know what product management was yet; that gets figured out a little bit later in the story. So I got into that and started studying some of the old internet marketing stuff, you know, being on Warrior Forum, all those materials, all that sort of stuff, and went down that rabbit hole of copywriting and that sort of thing, fascinated that you can test things out, run an A/B test, make more money, and actually measure that. I found that sort of fascinating. So I was there for a while. Then after I had seen that through, I went out and started my own little mini agency doing exactly that stuff, focused on ecommerce businesses, so I got in really early into the Shopify space. This is around 2009, 2010, and I got into it around then.
Richard Joe 5:24
That’s really interesting, because, sorry to interrupt, no one was doing that at the time. Not many people really specialized in Shopify like that.
Bryce York 5:32
Yeah, it was. They weren’t the winner yet; a lot of people were still on WooCommerce and working through all those sorts of pieces. So there’s a little bit of that in it. But I really liked what Shopify was bringing to the table, making a better solution for everybody, and so I leaned into that and was really focused on building out stores that were really conversion-focused. It was kind of a web dev agency focused on conversion. That was really where I got deep into it and obsessed with it: building out and running tests and experiments and A/B testing, all that sort of stuff, for clients as a service. I did that for a couple of years, and that was really my foray into that space. I also learned how to code just to be able to solve stuff myself, working with offshore engineers, all those sorts of things. And then I joined a company as employee number one, a company called Red Frame. Our focus there was around teaching enterprise companies how to operate like startups, how to take a more experiment-driven approach, not assuming you know the answer to everything: design thinking, Lean Startup, all that sort of stuff. We built out this learn-by-doing education platform that was really focused around that. That’s where I got my first foray into machine learning and AI, taking what an expert does and turning that into a system that could do it for you, adding leverage there. And then actually, speaking of Warrior Forum, around that same time, when I was finishing up there, it got acquired by Freelancer.com, and I actually interviewed for the CEO role and ended up deciding it wasn’t quite the right fit. But that was almost a path I took, heading that company, which was cool: to go from just some guy reading it that didn’t know anything about anything to having that potential there. So it’s funny how things go full circle. And it was actually at that company, Red Frame, where I discovered what a product manager was, because we had an intern that we were working with, and we said, well, you’re awesome, you’re such a smart guy, you know your stuff, we’d love to hire you as soon as you finish school. What job do we have to create for you to be here? He’s like, oh, I want to be a product manager, do you think you could make that work? And I’m like, oh, let me Google “product manager”. That’s what I’ve been doing for, like, three years. Cool, I guess I’m the head of product then, as opposed to just a guy.
Richard Joe 7:46
So you were kind of already doing it. Looking back, you were just intuitively doing these things that a product manager does, but you didn’t have that title yet, and it wasn’t a thing that was formalized in your brain, like, oh, I studied product management. But looking back, did you feel like you had to go through, I don’t know, any courses or read through some books to refine your knowledge about product management? Because you can go for the intuitive pathway and just do it without the title, so to speak. But do you feel like you had to have mentorship or books or whatever?
Bryce York 8:31
Yeah, I think those things all go a long way. At the time, product management wasn’t something you could even study formally; there weren’t product management schools, and you certainly couldn’t major in it at university. Today that is an option, which is really cool. A lot of that is similar to, like I mentioned, the marketing degree: it didn’t make me a better marketer, but it enabled me to learn how to become a marketer later. And I think a lot of those product courses are in that same vein, where they teach you the ways of thinking and the ways of approaching, but it’s not necessarily as pragmatic as actual practice. I think one of the best ways to learn how to be a product manager is to try and build a product for yourself, like a little side project. It doesn’t matter if it actually makes money; ideally you try and do that, because that’s just an extra barrier. But with the no-code tools that are out there today, you can really actually build something. And if you can, you’re in a pretty great position to actually go out and market it, which is the part that most people with a side project, most people that are building a product, are terrible at, because they just think, if I build something awesome, then people will show up. But for me, books were such a big part of it. There are so many great books in the product space, and entrepreneurship if that’s what you want to sit around. One of the best ones for building a product from scratch is Running Lean. Everybody knows The Lean Startup, but I think of that as the textbook, which is not for everybody and is very heavy, rich with information, whereas I think of Running Lean as the recipe book: follow the bouncing ball, do this, do this, do this. And Steve Blank’s (I’ve actually got a copy just over here somewhere, slightly out of arm’s reach) The Startup Owner’s Manual is also an incredible resource. So, great places to start.
Richard Joe 10:20
Yeah, Steve Blank. I think I remember reading his book maybe 10 years ago. It’s very applicable. And I mean, you see a lot of the same things we’re doing in the CRO world when it comes to looking at qualitative and quantitative data and prioritizing, and all that sort of stuff. So where did you go from there?
Bryce York 10:50
Yes, yeah, I lost myself in my backstory a little bit there. But yeah, from there I ended up at a company called Finder, a comparison website that helps people make better decisions. They’re a very CRO- and SEO-driven company, so I went in there and focused on growth product management, working in their global team. And then I ended up transferring with them to New York. I had wanted to move to New York for years, just for the life experience of being an expat and exploring that, and that ended up working out really nicely with that role. And then, once that time came to an end, that was how I ended up at my current company, Tatari, which is an ad tech company focused on buying and measuring TV advertising and turning it into a performance marketing channel, which is something that’s really been made possible by this company, which is really cool. So you can actually get a ROAS for your TV ads, and it doesn’t cost a million dollars to get started; it’s more like $5,000, which is pretty incredible. It unlocks a whole new channel once you start to hit the ceiling on ecommerce and the volume and scale you can get out of that. It certainly unlocks some pretty cool experimentation opportunities for the CRO folks, for sure.
Richard Joe 12:07
Yeah, we’ll get into that shortly, maybe. And to keep it to experimentation, of course, let’s put it this way: you’re a product manager, and there are some product managers in our audience as well who do experimentation and growth. Maybe you can talk about certain things like...
Like, you know, pure CRO versus product management experimentation: maybe talk about the differences between the two, and maybe talk about, when product managers try to get into an experimentation sort of process, what common mistakes they make and what kind of mindset they need to have. Because, you know, there are differences, but at the same time they’re sort of similar.
Rommil Santiago 13:06
This is Rommil Santiago from Experiment Nation. Every week we share interviews with, and conference sessions by, our favorite conversion rate optimizers from around the world. So if you liked this video, smash that like button and consider subscribing; it helps us a bunch. Now back to the episode.
Bryce York 13:19
Yeah, absolutely. I’ve almost entirely been at companies under 350 people, ranging from three to 350, over the decade-plus of doing this stuff. And in many cases you’ll find that there’s somebody that’s doing CRO or there’s somebody that’s doing product management; in small companies you often don’t have somebody doing both, and so you kind of end up backfilling for one another if you don’t have that. So I think that’s one of the really interesting things that you’ve got to look at: if you want to get into one or the other, like if you’re a product manager wanting to get into CRO, or a CRO wanting to get into product, finding a small company where they’re not separate roles is a really great way to transition between the two. A lot of the stuff that’s similar is around prioritization. You’ve got myriad ideas; you want to come up with those ideas, think them through, and then figure out up front which ones are most likely to have an impact, making sure you’re making big swings but that they’re also realistic. You don’t want to set out on some quest that has a very low chance of panning out, because the opportunity cost is so high. Places where it’s different, though, I think are really in the altitude that you’re operating at and the resources that you have at hand. When you’re a product manager, you’ve generally got a designer that you’re working with and a number of engineers, and more and more today, and as we go forward, data scientists and machine learning engineers as well. So you’ve got a lot of firepower that you can direct and aim, which is both a really cool opportunity and a lot of pressure, because it’s not just your time. If you spend your own time working on something and it doesn’t work out, that’s one thing, but when you’ve got a team of five, six, seven, eight, ten people, even as an individual contributor, you’ve really got to make those things count and be agile and ready to go. And so I think that’s also a difference in how experimentation fits in. As a CRO, most of the time you’re looking for experiments that themselves deliver the outcome you’re looking for: you want to run a test and see an uplift in conversion, and that’s exactly what you’re looking for, that’s the outcome itself. Most of the time in product management you don’t actually get to do that; it’s a step along the way. Most of the time the experiment is about validating some assumption that you have along that way. Whether it’s: do people actually want this? Is this actually viable, are we going to be able to get more value out than we give, and are we going to be able to give enough value to the user? And is it even feasible, is this thing actually going to work? So you’ve got that piece as well, where your actual deliverable isn’t the experiment itself; the experiment is just about reducing risk and helping guide you along the path. I don’t remember the exact number of the stat, but any space shuttle on its way somewhere is actually off course the vast majority of the time. But because it’s self-correcting every few seconds, a little bit to the east, a little bit to the west, correcting along the way, you end up at the right place in the vastness of space. I think that’s one of the key things of experimentation in product management: you’re doing exactly that. You’ve got this general direction that you’re trying to get to.
And you’re using experiments and testing along the way in order to go, oh, actually, not that way, that didn’t quite work.
Richard Joe 16:41
Would you still have, you know... they talk about North Star metrics in CRO. Like, say we’ve got a North Star metric of, on our site, a common one, just making it up, like increasing average order value by X percent or whatever. Do you still follow a sort of framework of having this North Star metric in product or growth? And how does that work out?
Bryce York 17:14
Yeah, in product and growth it’s very much the same thing, often the same metric as well. And so we’re working in that same direction, just bringing different tools, and in an ideal world you have a bit of both; where it sits in the funnel kind of determines the tools you have available there. So yeah, there’s definitely a commonality there, working towards that metric. And it especially depends on the type of company you’re in as well; that’s one of the big variables in product management, it can really vary a lot between the company stage or the type of business that you’re at. When you’re at a smaller company, there’s going to be less data to work with. And then if you’re in a B2B company versus B2C, especially if it’s enterprise, not individual-user B2B SaaS where you get everybody in the company to use it (Slack isn’t exactly lacking for data), but if you’ve got a product that’s only used by three people in the company, then your volume of data is going to be very different. And that changes the nature of the types of experiments you need to run: you’ve really got to take bigger swings in order to even be able to measure what’s coming out. So you’ve got to design your experiments to be proportional to the expected impact and to the amount of data you’re going to get. More data means you don’t have to have as much impact, but less data means you really have to blow things out of the park. It’s a lot easier to get to a significant result if you’re 10x-ing the metric. But 10x-ing the metric isn’t always easy.
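To make that data-versus-impact trade-off concrete, here is a minimal sketch (not from the interview; the baseline rate, lift, and power numbers are illustrative assumptions) of how the required sample size per variant grows as the detectable lift shrinks, using a standard two-proportion power calculation:

from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float,
                            relative_lift: float,
                            alpha: float = 0.05,
                            power: float = 0.80) -> int:
    """Approximate users needed per variant to detect a relative lift
    in a conversion rate with a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# Illustrative numbers only: with a 3% baseline conversion rate,
# a 5% relative lift needs far more traffic than a 50% lift,
# which is why low-data products need the bigger swing.
print(sample_size_per_variant(0.03, 0.05))  # roughly 200,000 users per arm
print(sample_size_per_variant(0.03, 0.50))  # roughly 2,500 users per arm

With a low-traffic B2B product, the second number is reachable and the first usually is not, which is the "bigger swings" point Bryce is making.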
Richard Joe 18:48
Yeah, I mean, you always see those articles about, you know, crazy experiments that 10x or 100x or whatever, but I’m sort of cautious about those sorts of experiments, because they don’t talk about all the failures and all those sorts of things. Maybe go through the process of how you go about looking at qualitative and quantitative data and developing a sort of hypothesis and testing framework in your role. Is that okay?
Bryce York 19:23
Yeah, absolutely. So my current role is at a company that’s at the coalface of that data-versus-impact kind of challenge, where our client, even if they’re really happy, might still only have two people in their business actually using the product. And that makes scale harder in terms of user-experience-type experiments. But then when you look at the media spend that we’re working with, those two people are putting actual media dollars through the system. We have hundreds of millions of dollars, over the lifetime of the company billions of dollars, and we have all that historical data. So you have a different type of experiment that you can run; it’s more about the impact of actual behaviors and strategies and methods for buying and measuring advertising. So there’s always a different place to look to find where the data is, and you can focus your optimization in that area, I find. The other dimension that comes into it is quantitative and qualitative; you touched on this a little bit earlier. I like to think of those as a loop that feeds into one another. The qualitative aspect is really feeding you ideas and turning those into hypotheses. So whether that’s customer interviews, or talking to internal proxy users, a proxy user being someone who’s not your actual end user but who knows your end user really well. That’s really good for adding extra data: if you don’t have the bandwidth, or when you’re early stage and you don’t have the number of clients to go and talk to 12 people, you can talk to three or four people and then talk to people that know that customer really well, so your customer success teams, those sorts of things. You can work with those to generate ideas, take that divergent thinking: oh, we think this, so we believe that such and such will result in such and such, therefore such and such. So you’re really building out that hypothesis, then turning that into the quantitative experiment. Process-wise, it’s really focused on generating those ideas, prioritizing between those ideas, then you get down to the actual hypothesis definition and experiment design. And there are lots of really interesting experiments you can run in a more product-related context. One of my favorites, which doesn’t get used as much as I think it could, is the painted door test, otherwise called fake features: paint a door and see if anyone comes a-knockin’. It’s a really good way to learn without actually having to build it out, which I think is a common trap that a product manager can fall into, and that I think CROs are much better at avoiding: you don’t have to build it to find out if it’s going to work. Sometimes you can learn things before you go and do all that work of building out a feature, to figure out if people actually want it, if people are actually interested in it. And it’s best practice to do that. When you’ve got a whole bunch of engineers at hand, it’s very tempting to go and build a bunch of stuff before you know that it’s needed. So that’s a big factor separating a good product manager from a great product manager.
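As an illustration of the painted door idea (a hypothetical sketch, not Tatari’s implementation; the feature name and event counts are made up), you might expose the fake entry point to a slice of users, log clicks through to a "coming soon" page, and put a rough confidence interval around the click-through rate before committing engineering time:

import math

def wilson_interval(clicks: int, exposures: int, z: float = 1.96):
    """Rough 95% confidence interval for a click-through rate."""
    if exposures == 0:
        return (0.0, 0.0)
    p = clicks / exposures
    denom = 1 + z ** 2 / exposures
    centre = (p + z ** 2 / (2 * exposures)) / denom
    margin = (z * math.sqrt(p * (1 - p) / exposures
                            + z ** 2 / (4 * exposures ** 2))) / denom
    return (max(0.0, centre - margin), min(1.0, centre + margin))

# Hypothetical numbers: 1,200 users saw a fake "Recommendations" tab,
# 84 clicked through to the "coming soon" page.
exposures, clicks = 1200, 84
low, high = wilson_interval(clicks, exposures)
print(f"CTR about {clicks / exposures:.1%}, 95% CI {low:.1%} to {high:.1%}")
# A click signals interest, not proof of intent; follow up qualitatively
# with the users who clicked, as discussed below.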
Richard Joe 22:39
Would you say, in this case, for the product managers that are listening who’ve got an idea to build these features and whatnot, and they’ve got an army of engineers, that you’d recommend the painted door test as a sort of form of risk mitigation? Like, if you’re putting this feature or fake button somewhere, you’re using that to gauge user intent, in a sense. And how do you distinguish... let’s say you do a painted door test and you find that X amount of users clicked on this fake feature and went to a landing page that said it’s under development or whatever. How do you determine between intent versus, well, they clicked on it, okay, but they were just curious? They might have just been like, oh, there’s a big button, it’s red, and they clicked on it. I mean, do you still have to read between the lines?
Bryce York 23:47
Yes, absolutely. I think that’s where the quantitative-qualitative piece, where I said it’s a loop, comes in; that’s exactly another loop of the iteration. So, okay, people are actually clicking on this; therefore, let’s talk to the people that are clicking on it instead of trying to talk to everybody. You can send an in-app message to those people, email those people, get on a call with those people. But then you’ve also got to think about the flip side: if they’re not clicking on it, is that because they don’t want the feature? Or are they not clicking on it because you did a really bad job of labeling it? Or maybe they just never noticed it. So that’s one of the things you’ve always got to be thinking about: what could be causing my conclusion to be incorrect, and what could be causing it to seem correct when it’s actually not?
Richard Joe 24:33
Why do you call it a loop, as opposed to... yeah, why do you call it the quantitative-qualitative loop? What’s the sort of feedback process involved in that?
Bryce York 24:46
Yeah, I think the key thing is that it’s like a circle, it doesn’t end. I think that’s a key part of it: it just keeps going and feeds into itself. You learn something qualitatively, and then you want to get more sure about that, so you layer in a quantitative piece. But then that quantitative piece probably reveals something else that you want to explore more qualitatively in order to generate further future ideas. I like that it reinforces this idea of it being a continuous process, continuous improvement, and not something where, oh, you do qualitative research, then you design a quantitative split test, and that’s it, and then you go and do that with something else. It’s about focusing on how you feed things into themselves; you want hunches to come out of your various analyses and ideas. So you want to intertwine it together, as opposed to what I think is a common trap, where you’re just throwing out all different things across the board that aren’t connected to each other. Because not only do you want to have a North Star metric, like we were talking about a few minutes ago, you also want to have a North Star vision: you want to know what you’re working towards in the bigger picture, and have that informing your process as you move towards it. So, your minimum viable experiment: how can you validate the things that help you understand if that’s the right thing to be building, while also understanding whether that’s even the right thing at all? Because you’re going to change your mind along the way, and you want to be able to adjust. If you’re just thinking about one foot in front of the other, you could easily walk off a cliff.
Richard Joe 26:23
Mm hmm. That’s a nice segue. You talked in your Medium article about your minimum viable experiment versus the minimum viable product. Maybe you could discuss that for our audience, what the differences are and those sorts of things?
Bryce York 26:42
Yeah, absolutely. I think the minimum viable experiment is something that’s probably very intuitive to a CRO, or a CRO that turns into a product manager, but for a product manager that comes from a different background (and very rarely does somebody in product management come straight from CRO; it’s a hard learning curve if you go straight into it with no other work experience), it’s easy to miss. Startups, Lean Startup, and product management talk so much about the MVP, creating the minimum viable product, and that is really great advice. But making a product is a lot of work; it takes a lot of time to make a product. And so if the first thing you do in order to validate your idea is to build something, well, there are a lot of people that will probably be offended by me saying it, but that’s probably not the right idea. You want to have more certainty than just "I think this is a good idea." And I don’t think there’s any product manager or entrepreneur or founder worth their salt who isn’t at least really tempted to do that. But the minimum viable experiment is a framing to think about it in a more structured way. So rather than it just being a hunch, it’s: how can I design an experiment that confirms that this is the right thing to build? There are lots of different ways to approach that, but it’s really looking at it through that lens. A recent example: a product that we were building at my current company was a recommendations tool. In this case we were actually building it because we’d already run the experiments to figure out that people wanted recommendations, because there are 24,000 things to choose from in our catalog of inventory, and then there are myriad ways to tweak and adjust and configure those to get them to exactly what you want. So you can almost imagine it as basically infinite possibilities, which is exciting but also overwhelming, so you want some way to narrow that down. The way we designed our experiment, to figure out what we were building and whether it was going to meet the needs of the customer, was we took the same question: in this context, as this advertiser, what inventory would you test? We asked that of expert media buyers with a decade or more of TV media buying experience and had them answer it. Then we had the algorithm answer the same question, and we measured the difference between the two, to figure out how similar the expert and the robot were, and used that to measure it. So that was the experiment, rather than the much more complicated experiment, which comes afterwards: you ship the feature out and measure, okay, if somebody uses these recommendations, how does their performance compare to somebody who didn’t? There are a lot of variables in that and a lot of noise to try and hold constant, so that’s really tricky, and you’ve also kind of got to let the horse out of the gates before you can even find out whether it’s a good idea. So more creative experiment design is something to look at, and it’s where quantitative and qualitative can work together and be really handy, because we didn’t run an A/B test; we just asked people, what do you think, and then compared that to what the machine thinks.
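A minimal sketch of that kind of expert-versus-algorithm comparison (the similarity measures and inventory IDs here are assumptions for illustration, not Tatari’s actual method) could score how much the algorithm’s picks overlap with what expert buyers chose for the same advertiser:

def jaccard(a: set, b: set) -> float:
    """Overlap between two recommendation sets (1.0 means identical)."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def precision_at_k(algorithm_ranked: list, expert_picks: set, k: int) -> float:
    """Share of the algorithm's top-k picks that an expert also chose."""
    top_k = algorithm_ranked[:k]
    return sum(1 for item in top_k if item in expert_picks) / k

# Hypothetical inventory IDs for one advertiser.
expert = {"NBC_prime", "ESPN_late", "HGTV_day", "FOX_news_am"}
algo = ["ESPN_late", "NBC_prime", "CNN_prime", "HGTV_day", "AMC_late"]

print(jaccard(set(algo), expert))         # set overlap, 0.5 here
print(precision_at_k(algo, expert, k=3))  # agreement within the top 3, about 0.67

Aggregating a score like this across many advertisers gives a read on whether the algorithm is "expert-like" before any customer-facing launch.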
Richard Joe 30:04
That’s really interesting. Do you ever do these sorts of things concurrently, though? Like, does it necessarily have to be either-or thinking, or can you sort of, you know...
Bryce York 30:15
And really, I think something that I’ve learned in my career is that almost nothing is either-or, almost nothing is black or white; there’s always a way to do both, a way to consider both. And so you want to balance that against focus. You also don’t want to do everything in a waterfall, linear fashion where you do one thing after the other. So I think that’s where prioritization comes in as key, and then looking at how much you can do at once before hitting a point of diminishing returns. You don’t want to have so much work in progress that nothing gets done, because then you don’t get the benefit of it; you want to front-load value delivery so that you’re actually finishing something and getting it out there in the hands of users. But also, if you get 20 people into a kitchen to make a pizza, it’s not going to happen 20 times faster than if you just have one person in there. And in most cases in technology the number is not one, but it’s not all that far off; it’s usually two or three people that’s really the sweet spot working on a given thing, depending on the size of it, of course. So you can have a few of those in a team: if you’ve got a team of six or nine people, you can have two or three things on the burner at once.
Richard Joe 31:25
You also talk about, I think you’ve mentioned this before... no, sorry, we’re going to skip to the show notes: prioritization, identifying your riskiest assumption. Maybe talk about that for a little bit.
Bryce York 31:41
Yeah. So when it comes to prioritization, impact is often a big part of decision-making in any sort of prioritization: what’s really going to move the needle? And then you want to look at risk as well: how likely are we to achieve that impact? But in the product world, you also have to think about risk in terms of the chances that you’re wrong. So you’ve got to look at your assumptions and map those out, which is a whole process in itself, but you can also just think about: where might I be wrong? What do I think I know that I don’t? And then you want to look at the consequences of being wrong. So you want to dig into those and design experiments in order to validate them. The idea is that there are two types of risk: things that you can know in advance and things that you can’t. For the things that you can’t know in advance, you want to focus on just getting it out there. You want to ship early, reduce the time to actually figuring it out, and then reduce the surface area as well: maybe don’t ship it to everybody, maybe ship it as a beta, and learn that way. And then for the things that you can learn in advance through experimentation, you want to set up those experiments, but focus on the things that matter. So if you’re building out a whole new feature for how to do retargeting on TV, for example, maybe you don’t want to worry about the color of the button and whether it should be on the left or the right and build a split test around that. You want to focus more on: okay, how much can a given advertiser spend on this, and is it enough that they’re going to bother? Because if they’re only going to spend $100 a week, what use is that? But if they can spend $10,000 a week in order to generate a 5x ROAS, then that’s a much more exciting possibility. And you can test that out before you build out the feature and capability; you can do it manually.
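One lightweight way to put that "where might I be wrong, and what does it cost" framing on paper is a simple risk score per assumption: the likelihood of being wrong times the consequence if you are, tackled highest first. The fields, weights, and example assumptions below are illustrative, not a framework from the interview:

from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    chance_wrong: float        # 0..1, gut-feel likelihood the assumption is false
    cost_if_wrong: int         # 1..5, consequence of building on a false assumption
    testable_in_advance: bool  # can an experiment de-risk it before shipping?

    @property
    def risk(self) -> float:
        return self.chance_wrong * self.cost_if_wrong

# Hypothetical assumptions for a TV retargeting feature.
assumptions = [
    Assumption("Advertisers can spend over $10k/week on this", 0.5, 5, True),
    Assumption("The button colour matters", 0.9, 1, True),
    Assumption("Ops can traffic these campaigns manually at first", 0.2, 4, False),
]

for a in sorted(assumptions, key=lambda a: a.risk, reverse=True):
    plan = "experiment first" if a.testable_in_advance else "ship a small beta"
    print(f"{a.risk:>4.1f}  {a.statement}  ->  {plan}")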
Richard Joe 33:37
Do you have to, when your experiments are finishing and you’re looking at doing some post-test analysis, sort of distinguish between signal and noise in this area? Do you have to do that often, I’m just wondering? Because I’m thinking, like, well, you know...
Bryce York 34:00
Yeah, how do you know for sure? And I think in a lot of these sorts of cases it’s even harder than in the CRO world, where you can keep a lot of those things constant with volume. When you’re running a more qualitative test, it certainly gets harder. That’s where I think the test framework is really important: making sure that your experiment design has these things figured out up front. I think one of the most important aspects is your exit criteria. When is your test done? And not just when is it a winner, but also, when is it a loser? Because one of the things you can often do is say, I’ll just keep it running; eventually we’ll get more data and feel more confident. So make sure you set yourself up for that before the data starts coming in, so you don’t negotiate yourself into whatever your own predilection is, whether you want the test to win or want it to fail, changing your criteria to match. That’s something that you definitely want to keep in mind.
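A minimal sketch of pre-registering exit criteria before the data arrives (the thresholds, helper names, and numbers are assumptions, not a prescribed process): fix the per-arm sample size and significance level up front, and only call the test once that sample size is reached.

from dataclasses import dataclass
from statistics import NormalDist

@dataclass(frozen=True)
class ExperimentPlan:
    name: str
    users_per_arm: int    # fixed before launch
    alpha: float = 0.05   # significance level, fixed before launch

def decide(plan: ExperimentPlan,
           control_conv: int, control_n: int,
           variant_conv: int, variant_n: int) -> str:
    """Only call the test once both arms hit the pre-registered sample size."""
    if min(control_n, variant_n) < plan.users_per_arm:
        return "keep running (pre-registered sample size not reached)"
    p1, p2 = control_conv / control_n, variant_conv / variant_n
    pooled = (control_conv + variant_conv) / (control_n + variant_n)
    se = (pooled * (1 - pooled) * (1 / control_n + 1 / variant_n)) ** 0.5
    z = (p2 - p1) / se if se else 0.0
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    if p_value >= plan.alpha:
        return "stop: inconclusive (that is a result, not a reason to extend the test)"
    return "stop: variant wins" if p2 > p1 else "stop: variant loses"

# Hypothetical plan and outcome.
plan = ExperimentPlan("recommendations-entry-point", users_per_arm=2500)
print(decide(plan, control_conv=180, control_n=2600,
             variant_conv=235, variant_n=2580))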
Richard Joe 34:58
Um, I think you also discussed getting smart versus getting lucky. Maybe you can talk about that?
Bryce York 35:07
Yes, getting smart versus getting lucky. I think that’s a trade-off that you really have in your experiment design. Getting lucky is where you’re sort of throwing things against the wall, and then if the experiment turns out to work, if A wins or C wins, then great, you run with C and you’re making money out of that. But did you learn something that you can apply in other contexts? That’s the trade-off of getting smart versus getting lucky. If you want to take a really big swing, you can change everything, redesign the whole page, and then you can compare those two layouts and see which one wins. That’s a great getting lucky test. A getting smart test would be either running it as a multivariate experiment, so that you can see which individual components mattered, or doing it incrementally, where you change one piece at a time: change the above the fold, change the headline, change the layout of the page. So that’s a lever that you’ve got when you’re designing your experiments, to decide which way you want to go. Getting smart might sound like it’s always the answer, but sometimes getting lucky also means getting a nice ROI that funds you to push a little harder on a different project, in a different direction. So like all things, it’s in the gray area, it’s about balance. I like to think of it as taking experiments in both directions, because sometimes you have a really strong hunch that this is the new way to go, but if you go incrementally, piece by piece, it can take you a long time to get there. And you can always double back: you can take the big getting lucky shot, make the big change, and then once that lands and you figure out if it works, you can ratchet it back and take a more getting smart approach along the way, and maybe only do that with part of the traffic, so you’re not losing out on the upside.
Richard Joe 37:01
Do you feel like the balance between those is really resource constraints as well, resource and maybe time constraints? So if you’ve got low resources, maybe there’s political pressure on you to get some sort of result, you might have to go for, I’m just making this up, the getting lucky approach, because you’ve got to get a big result. And maybe you could argue that the more scientific getting smart approach is probably a bit better technically, in terms of isolating variables and what’s moving the needle, but you may not have the time or budget or resources to do that iterative approach. Would you say that?
Bryce York 37:42
Absolutely, yeah. You’re looking at two major constraints: your resources, in terms of the time and effort to be able to build out and analyze and manage the experiment, and then the other constraint of data, the amount of traffic coming in, depending on what type of test it is. If you’ve only got so much data coming in, and you can learn the results of one test every week and it’s going to take you 12 tests, versus taking a bigger swing, doing it all at once, and comparing A versus B, where you can learn something in two weeks, that’s where it comes in. But you’ve got to have the minimum detectable effect in mind as well: you’ve got to have the plausibility of moving the needle enough to even be able to know if it’s working. So that’s where that trade-off comes in again, of knowing, for your given experiment, is your impact proportional to the amount of data you’re expecting to access? And that "expecting" is: how much time have you got? Can you run the test for three months? Do you want to? Or do you need to run it for a week or two instead?
Richard Joe 38:49
That’s good advice. And lastly, from the show notes, we talked about the experiment log: knowledge sharing, sharing your work, building that into organizations. Can you maybe talk about that?
Bryce York 39:02
Yeah, I think an experiment log serves two purposes. One is that your memory is fallible; you’re not going to remember all of the details. I was deeply reminded of this as we were prepping for this conversation and I was digging through some old resources, and I have a few items on our to-do list of, oh yeah, put together an experiment design template, and a few things like that, where I’m like, that would be a great resource to be able to talk about and showcase. But I didn’t make it; I didn’t end up building out that more CRO-centric side of things, because I’d moved into the product side where things are a little bit different and where it’s more of these qualitative experiments. If I’d done that, I would have had it to reference and been able to share it with everybody. Similarly, when you run an experiment and you learn things from analyzing that experiment: most of the time, with a well-designed experiment, you don’t just learn that A was better than B or B was better than C; you’re actually learning things and developing new hypotheses about why that result was the way it was. And you want to capture those things, not just for yourself but for everybody else. So one of the best ways, for a CRO workstream, to build a more experiment-driven culture, or in product management, to build more experiment- and data-driven product management, is to show and evangelize that work. To work out loud is one way to put it. If you capture those resources and notes and share them around and make them expected reading, it becomes a really great tool for growing the discipline. And if you have more heads in the game, you can get more results out of that, and you build an appreciation for it and start to build the culture that you need in order to have a good experimentation approach: the embracing of failure and being wrong. If you can show the outcomes of taking a chance at being wrong leading to results, it really goes a long way. And it’s one of the best onboarding resources you can have, in my opinion, as a CRO or product manager: let me go through and read the experiment logs from the last three, six, twelve months. The amount that you’ll learn out of that, from "we tried this and learned this and this was the result," is remarkable. In my years I’ve only once come across a company that had anything close to that. So you’ll be in good stead as a small company if you really keep a good record of those learnings.
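As a concrete, entirely illustrative example of what a lightweight experiment log entry could capture so the learnings outlive the test, a simple structured record is often enough; the fields and the example entry below are assumptions about what is useful, not a template from the episode:

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class ExperimentLogEntry:
    name: str
    hypothesis: str               # "We believe X will cause Y because Z"
    primary_metric: str
    exit_criteria: str            # decided before launch
    started: date
    ended: Optional[date] = None
    result: str = "running"       # win / lose / inconclusive
    learnings: List[str] = field(default_factory=list)
    follow_up_ideas: List[str] = field(default_factory=list)

# Hypothetical entry for a painted-door style test.
log = [
    ExperimentLogEntry(
        name="Painted-door recommendations tab",
        hypothesis="Advertisers want inventory recommendations because the catalog is overwhelming",
        primary_metric="Click-through to the 'coming soon' page",
        exit_criteria="1,000 exposures or 4 weeks, whichever comes first",
        started=date(2024, 1, 8),
        ended=date(2024, 2, 2),
        result="win",
        learnings=["Clicks skewed toward newer, smaller advertisers"],
        follow_up_ideas=["Interview the ten most active clickers"],
    ),
]
print(f"{len(log)} experiment(s) logged; latest result: {log[-1].result}")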
Richard Joe 41:36
That’s really good advice. It’s very easy to just run experiments, do the thing, then move on to the next thing and not record the actual learnings, which is, you know, the main takeaway. Look, thanks for coming on the show and explaining your journey to our audience; for aspiring product managers, or product managers who want to get into experimentation, it’s really useful. How can people contact you, Bryce?
Bryce York 42:07
Yeah, so social-wise, LinkedIn is my biggest place, so look me up there, Bryce York. Otherwise, bryceyork.com, where I share longer-form content, still on the shorter side, so articles and things there. But I would recommend both, because I share a lot of shorter content on LinkedIn. So jump on there, shoot me a DM if you have any questions, and I look forward to getting in contact with you. I love talking about experimentation, love talking about being data driven, and I really miss the CRO stuff sometimes, because that feeling when you run a test and it wins is hard to beat. A test that I ran that had a $2.04 million upside projected over the next 12 months, I still think about that feeling. So yeah, I look forward to living vicariously through you all. So hit me up.
Richard Joe 42:57
Awesome. Thanks for being on the show. Cheers. My pleasure. Thank you.
Rommil Santiago 43:01
This is Rommil Santiago from Experiment Nation. Every week we share interviews with, and conference sessions by, our favorite conversion rate optimizers from around the world. So if you liked this video, smash that like button and consider subscribing; it helps us a bunch.
If you liked this post, sign up for Experiment Nation’s newsletter to receive more great interviews like this, memes, editorials, and conference sessions in your inbox: https://bit.ly/3HOKCTK