The truth about building an Experimentation Culture with Mark Eltsefon
AI-Generated Summary
In this podcast episode, Mark Eltsefon, a senior data scientist and experimentation evangelist, discusses building an experimentation culture in different companies. He shares his journey from starting as a software developer to becoming passionate about data science and experimentation. He contrasts the experimentation culture at TikTok, where data-driven decisions were central and democratization was encouraged, with his current role at an e-commerce company specializing in print-on-demand. He emphasizes the importance of educating stakeholders about data-driven decision-making, and he highlights the need for strong experimentation infrastructure and high-quality hypothesis testing. He believes that while democratization is valuable, it's important to balance it with maintaining the quality of ideas and statistical rigor.
AI-Generated Transcript
Speaker 1 0:00
Most, you know, data-related people, engineering-related people, they just focus on the last part, on statistics and math, on how data is collected, etc. And other people, product managers or whoever, just focus on political things: not on defining clear goals and clear hypotheses, but on political purposes. And finding a balance between them is really a challenge, and that's where the biggest problem is. Tracy Laranjo 0:47
Hey, Experiment Nation, it is Tracy. You may recognize me from the audio-only version of the podcast. This is, I think, my first video podcast, and I'm so excited to have today's guest on. So yep, today we have Mark Eltsefon, who is a senior data scientist and also an experimentation evangelist. And Mark, I've looked through your experience, and you've been an experimenter at TikTok Speaker 1 1:15
in the past. Yeah, that's right. Tracy Laranjo 1:19
I love it. Our listeners are definitely going to have a lot of thoughts and questions and curiosity about this, and I definitely want to get into that. But thank you so much for joining; it's going to be a real treat. And my first question for you, really, is: how did you fall into experimentation as a data scientist? Speaker 1 1:39
I think it's worth discussing how I fell into the data science field first. When I graduated from university (I'm originally from Russia, and I attended one of the most famous technical universities there), I didn't know what to do. I mean, data science and machine learning weren't at the same level of hype as they are right now. So I started my career as, I would say, a software developer. Many people now imagine something like C++, some complex logic, etc., but it was mostly about Excel and VBA, all of it. I just started my career from Excel, and I really believe it's a great tool. Maybe not now, but it used to be, no doubt. And then I came across Andrew Ng. I believe most people know him: founder of Coursera, professor from Stanford, many other titles. I fell in love with his Stanford lectures and just started to digest them, and I realized: yeah, this is my passion, I really want to go further, to find a job, to apply these skills in practice and to see what happens. So my revelation was Andrew Ng. After a year, I found my first job in a bank, some kind of fintech bank, and I realized: so many models, so many applications, etc. But how are we going to test them? That was the time when I found out about A/B testing, about experimentation. And for the last maybe five, six years, I've been in this field. I wouldn't say I'm focused only on experimentation and A/B testing, but it's definitely one of my core responsibilities. So that's how I ended up in experimentation. Tracy Laranjo 4:17
Nice. It's funny how Excel was your gateway drug to experimentation. You try one thing once, and you don't realize what other skill sets and avenues it opens up. So it's really funny how that led you to where you are now. Speaker 1 4:35
Yeah, I didn't know about Python, about machine learning applications, because I had studied, as far as I remember, just Basic; I don't remember, to be honest. And it's not really applicable for experimentation and data science projects. But by using my curiosity as an advantage, I found out about experimentation and machine learning, and that's how I started my career. I love it. Tracy Laranjo 5:11
So, obviously today you are at Gelato. What does Gelato do in terms of product management and experimentation? Speaker 1 5:24
Ah, yes. So just briefly, what is Gelato? Because everyone, when they hear "gelato", thinks we're talking about ice cream. Yes, that's true, but no. I would say it's an e-commerce company, and the industry is POD: print on demand. Whenever you want to order, for example, a mug with your face on it, and you especially want it produced locally, this is your way to do it. You just type in optimalprint.com (it's not an ad, it's just one of the company's brands), put your face on the mug, and that's it. It's really a great one. About experimentation: our data team is, I would say, relatively small, and we're just in our infancy regarding experimentation and the data-driven approach. Now we're building it, and I'm really fascinated with how we're approaching building this from the ground up. It's really cool. We've already run a few A/B tests, and we also apply different causal inference methods, because we have a B2B business and a B2C business, and sometimes the B2B side, in terms of clients (just the number of client companies), is relatively small, so you can't apply the common A/B testing methodology and approach. So yeah, we're just trying to build it, and I believe we're kind of successful. We'll see at the end of the year what we're going to achieve. Tracy Laranjo 7:44
Yeah, nice. Well, you said that you have quite a small team, so I absolutely want to press a little bit more on that. I know you're very interested in talking about experimentation culture, and people love to talk about experimentation culture. But for anyone who's new to the space, it sounds very abstract, and when you're starting to build that experimentation culture yourself, it's a big undertaking. So my big questions for you are: what did experimentation culture look like when you were at TikTok as a data scientist? And how does that differ from what experimentation culture looks like today at Gelato? Speaker 1 8:33
Yeah, sure. So briefly about TikTok. What did I know about TikTok when I joined? I hadn't even used the app before my job interview. Wow, yeah, because of my perception that it was just for children. Later I realized: no, it's for everyone, and everyone can find something useful there. It's really true. I genuinely love the app, and not because I'm a former employee. Tracy Laranjo 9:09
and, like, really awesome dances, like flossing, obviously. Speaker 1 9:14
At least. So when I joined TikTok, I was truly excited to see what the experimentation culture was there, because it was really a data-driven culture where data was at the center of all decisions. Many companies just brag about it: "Yeah, we make data-driven decisions." How? I don't know, we just use data, and that's it. It's abstract. But TikTok really followed it. We couldn't implement any new feature, even a minor one, without experimentation, which was really cool. Also, it's really great when every member of the company, not only the data team, can propose an idea and it can be tested. I would say it's a perfect experimentation culture when not only you or your team, responsible for one part of TikTok or another product, can test something, but everyone. I mean, it's really great. Eventually, I would say, we could end up with users proposing something and it being somehow automatically tested. Maybe it will never happen, and I know it's kind of crazy, but what an idea! Tracy Laranjo 11:02
Yeah, I'm sure there are ways you can incorporate customer feedback that's submitted through the TikTok app and use it to inform testing pipelines or product strategy. I'd be really interested to see that used in more product functions. Rommil Santiago 11:24
This is Rommil Santiago from Experiment Nation. Every week we share interviews with, and conference sessions by, our favorite conversion rate optimizers from around the world. So if you liked this video, smash that like button and consider subscribing. It helps us a bunch. Now back to the episode. Tracy Laranjo 11:38
But yeah, as it relates to Gelato, what does that look like? I'm sure it would be totally different than experimenting at TikTok. Speaker 1 11:48
Yeah, sure, it's different. First of all, because of the magnitude. I'm not sure how many people use TikTok; billions, I would say. We're not at that level at Gelato, maybe at some point in the future, but not now. And the structure of the whole data setup, the whole pipelines, is a bit different from TikTok's. Also, because we're really in our infancy, we have to test what works and what doesn't. First and foremost, I would say, to really build a strong experimentation culture there should be a shared understanding of the importance of data-driven decision-making. Many people belittle it or just downplay it. I mean, it's not obvious for everybody. For data-related people, yeah, sure, because your profession is to mingle with data, write a query, etc. But for designers, product managers, stakeholders, it's not as obvious: why can't we just ship this feature to the whole population and that's it? We truly believe in it, so it's going to earn us billions of dollars. But we should test it. That's why data-related people, experimentation people, should educate the other members. And that's where we are right now: we've taken our first steps, and I hope we'll see the results from it. Tracy Laranjo 13:58
So, two themes that I picked up from what you've said: you want to make sure people understand what data-driven decision-making even is, and also democratization, making sure everyone feels involved, like they're part of the process. One thing I've always wondered about is: is it ever too early to democratize your experimentation program and start getting people in? Are there too many cooks in the kitchen early on? Or is it actually great to have people from other functions coming in and starting to experiment right from the start? What do you think about that? Speaker 1 14:44
I believe it's easier to incorporate other people's feedback from the very beginning; not, you know, their hypotheses and so on, but at least feedback from the beginning. Because when your system becomes more and more complex, it's going to be harder to implement a new feature or to make your experimentation truly democratic, right? It's going to be a lot harder. Now, these two points, democratization and the data-driven approach, are obviously not the only things we have to build. We also need to focus on experimentation quality, and by this I mean, I would break it down into three parts. First of all, it's about infrastructure. Without correct data we can't come to any conclusions; well, we can, but they're going to be wrong, and that's definitely not what we're looking for. The second is hypothesis quality: having a clear hypothesis, choosing the right metrics, knowing what we can expect from this new feature or whatever it is. Especially at the beginning, when you're just building something, many people come to you: "I want to build this." You ask them: what is the benefit going to be? "I don't know, I just believe it's going to be really impactful." That shouldn't be your approach and pipeline. And the third one is experimentation quality itself: ensuring sound statistical methods, avoiding biases, implementing health checks, et cetera. So it's mostly about statistics and math, etc.
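(He mentions health checks only in passing; a standard example is the sample ratio mismatch check, which tests whether the observed traffic split matches the one you configured. Below is a minimal sketch using scipy; the counts and threshold are made up for illustration.)

```python
# Minimal sketch of a sample-ratio-mismatch (SRM) health check.
# All numbers are illustrative, not from the episode.
from scipy import stats

def srm_check(control_n, treatment_n, expected_ratio=0.5, alpha=0.001):
    """Chi-square test: do observed assignment counts match the configured split?"""
    total = control_n + treatment_n
    expected = [total * expected_ratio, total * (1 - expected_ratio)]
    _, p_value = stats.chisquare([control_n, treatment_n], f_exp=expected)
    # A tiny p-value means the traffic split is off: a data-quality problem,
    # so the experiment's results should not be trusted until it is explained.
    return p_value, p_value < alpha

p, srm_detected = srm_check(control_n=50_000, treatment_n=51_200)
print(f"p = {p:.5f}, SRM detected: {srm_detected}")
```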
Tracy Laranjo 17:12
Yeah. And if even one of those things is off, it kind of negates the whole point of experimenting. If your data is not right, it's all a waste of time, really, isn't it? If the quality of ideas is low, you're probably not going to see much impact. And if there's no statistical rigor, then you're just implementing what could be completely false. Unknown Speaker 17:46
So yeah, I would say it's really important. Tracy Laranjo 17:49
Yeah, totally. I'm totally with you on that. I think when we think about experimentation culture, it's maybe easy to start thinking about all the political aspects of it and disregard the functional aspects of the program. Or, on the other side, I've struggled with the political side when I was maybe too focused on the functional side. Do you have one particular aspect of your experimentation programs where you find things are more at risk of falling apart, or where the barriers are more common? Speaker 1 18:33
I would say it's really hard to find the balance, and it's a challenge. Most data-related people, engineering-related people, just focus on the last part: on statistics and math, on how data is collected, etc. And other people, product managers or whoever, focus on political things: not on defining clear goals and clear hypotheses, but on political purposes. Finding a balance between them is really a challenge, and that's where the biggest problem is. It's very tempting to fall entirely into one of these, but you have to keep calm, stay focused, and keep the balance. Even just in life, that's really paramount, from my perspective. Tracy Laranjo 19:44
That's really a good word to put it to: balance. There really are so many different moving pieces in an experimentation program. There's so much of the people aspect to it, and so much of the machinery aspect to it. What would you tell someone who's trying to start their first experimentation program and struggling to balance it all? Speaker 1 20:12
So my advice is going to be: just test it. The first step is really important. If it's your first experiment, you shouldn't care about keeping the balance, you should just start moving. Then, once your car keeps moving, don't forget about the rules: speed limits and other possible barriers to your movement. So I would say: just test it. And don't forget, a negative outcome from an experiment is worth as much as a positive outcome, maybe even more. Tracy Laranjo 21:07
Yeah, negative outcomes are still learnings, they're still lessons. That's how you succeed: you have to do it wrong a few times, maybe many times, until you finally get it right. What you just said is very relevant to experimenters in general: you have to be okay with getting it wrong the first few times, and that's okay. Speaker 1 21:35
Yeah, you can apply this rule to every facet of your life. Don't be afraid to get it wrong, to get some negative results. The most important thing is to get something out of it. If you're just doing it wrong infinitely and not learning, that's not the best. You should learn, and then it's okay. Tracy Laranjo 22:06
Yeah, I think that's a really good message to impart to anyone who's doing experimentation, not just the beginners. Now, obviously there's a balance at play here, but I'm sure you also want to throw speed and velocity into the mix once you've gotten your footing, once you're able to get going and you've started with a few experiments. How do you make sure you're accelerating your experimentation program while also balancing things like statistical rigor, and generally bringing results and making people happy, like they're contributing towards something? Speaker 1 22:53
I would say there are two main approaches to this. The first one is about how you set up your experiment, your A/B test. It should be done in a quite straightforward way; there shouldn't be huge gaps in it. It shouldn't start with "I want this feature" without knowing what should be tracked, how much money we can generate with it, and where we want to launch it. You should design it step by step, and most bogus ideas just won't make it further with this approach; maybe they can even be refined. If you ask anyone who offers an idea, "What are we going to achieve?", at the beginning they don't know, but with a discussion, with some brainstorming, we can get to it. And that's really great. The second part is mostly about the math stuff, and this math part I would also break down. The first piece is about metrics. Choosing the right metric is really important, but not always easy. For example, you have ARPU, average revenue per user, but it's very variable and it's very hard to capture differences in it. So it's better to find some proxy metric, and the vast majority of the time it's not a problem to find one. That's, I would say, the best of these math approaches, because it's really obvious, you can find it, it doesn't require any complex mathematics, but it can give you a huge impact. The other math methods are mostly about variance: if we remember the formula for sample size in A/B testing, one of its inputs is variance. So how can we reduce variance? There are a few methods, like CUPED, like stratification; you can use whatever you like. Also, especially recently, sequential analysis, sequential methods, have become quite popular, and they can help you avoid and solve the peeking problem. The peeking problem is when you don't keep yourself calm and keep looking at the results of your experiment: today, in an hour, tomorrow, and so on, and whenever you see the result you expected to see, you finish it. Sequential methods can help with this problem, and they're also quite powerful: I came across some articles saying they can reduce the duration of an experiment by around 30% on average. You can google it and use it too. But especially at the beginning, I would focus on the first part: educating everyone about the experimentation approach and the infrastructure, just making sure that everything is correct. Then you will have time to accelerate your experiments. Don't panic, you will have this time. Don't worry.
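(To make the variance point concrete: the per-arm sample size for a standard two-sample test scales with variance, roughly n ≈ 16σ²/δ² at the usual α = 0.05 and 80% power, so cutting variance directly cuts experiment duration. Below is a minimal sketch of CUPED on simulated data, using a pre-experiment version of the metric as the covariate; all numbers are made up for illustration.)

```python
# Minimal CUPED sketch: remove the part of the metric that a pre-experiment
# covariate already explains. Data here is simulated, not from the episode.
import numpy as np

rng = np.random.default_rng(42)

n = 10_000
pre = rng.gamma(shape=2.0, scale=10.0, size=n)   # pre-period revenue per user
noise = rng.normal(0.0, 5.0, size=n)
post = 0.8 * pre + noise                         # in-experiment revenue per user

# theta is the regression coefficient of post on pre.
theta = np.cov(post, pre)[0, 1] / np.var(pre, ddof=1)
post_cuped = post - theta * (pre - pre.mean())

print(f"variance before CUPED: {post.var(ddof=1):8.1f}")
print(f"variance after  CUPED: {post_cuped.var(ddof=1):8.1f}")  # much smaller
```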
Tracy Laranjo 27:47
It's really funny that you mentioned sequential testing can actually be better, and faster, in some situations. I've always been afraid of sequential testing, to be honest. So should sequential testing be left to more experienced experimenters? Or is it possible to do it correctly without all this crazy statistical education? Speaker 1 28:18
Hmm, yeah, that's a good question. I wouldn't tackle this approach without really understanding what's behind it, because a bad design can give you a lot of problems. It's not worth hazarding a guess and trying to implement it just to save 20 or 30% of the duration, if it can quietly break down your system and even your culture, your whole experimentation approach. I would say it should come at the end, or, if you have the resources, you can find people who really know this method, and they can educate you, and then you can apply their experience to your company and your goals. But going in alone, in the dark? No, it's not worth it.
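(To see why the naive alternative, just peeking at a fixed-horizon test, is dangerous, here is a small simulation of an A/A test with no real effect, where we run a t-test after every day of data and stop at the first "significant" result. With a single look at the end, the false-positive rate would be the nominal 5%; daily peeking pushes it far higher, which is exactly the problem properly designed sequential methods correct for. All data is simulated.)

```python
# Simulation of the "peeking problem": repeatedly testing an A/A experiment
# and stopping at the first p < 0.05 inflates the false-positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_false_positive_rate(n_sims=1_000, days=20, users_per_day=200, alpha=0.05):
    false_positives = 0
    for _ in range(n_sims):
        # Two identical variants: any "significant" result is a false positive.
        a = rng.normal(0.0, 1.0, size=(days, users_per_day))
        b = rng.normal(0.0, 1.0, size=(days, users_per_day))
        for day in range(1, days + 1):
            _, p = stats.ttest_ind(a[:day].ravel(), b[:day].ravel())
            if p < alpha:              # stop as soon as it "looks significant"
                false_positives += 1
                break
    return false_positives / n_sims

print(f"false-positive rate with daily peeking: {peeking_false_positive_rate():.1%}")
# A single look at the end of the experiment would give roughly 5%.
```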
Tracy Laranjo 29:35
Yeah, so "don't try this at home" is what we're saying for sequential. Okay, great. No, that's really good to know. I've always wondered if it's possible to just get sequential testing right in one day, not really knowing how to do it the day prior. Just knowing that there are other methods out there, and that experimentation is not just A/B testing, is something that I think very junior experimenters don't quite know as well. Speaker 1 30:13
Yeah, yeah. And there are many other approaches. There are many cases where you just can't apply A/B testing, where you just can't split. For example, when something has already been implemented, how are you going to split it? There's no way to do it, and there are many causal inference methods that can help you find the impact. Or, for example, you want to measure the impact of some change where a split isn't possible. You can, in effect, create two universes: in one universe there's your change, and in the other there isn't. So you have to dive into some other aspects. They're not dark arts, so don't be afraid; they're also quite interesting, cool, sometimes complex. Just google causal inference methods and start your adventure there.
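(He doesn't name a specific method here, but one of the simplest causal inference tools for the "already shipped, can't split" case is difference-in-differences: compare the before/after change where the feature launched against the change in a comparable group that never got it. A minimal sketch on simulated data follows; all numbers are made up for illustration.)

```python
# Minimal difference-in-differences (DiD) sketch for a launch you cannot split.
# All numbers are simulated, not from the episode.
import numpy as np

rng = np.random.default_rng(7)

weeks = 8                                   # 4 weeks before launch, 4 after
launch = 4
trend = np.linspace(100.0, 110.0, weeks)    # market trend shared by both groups

control = trend + rng.normal(0.0, 1.0, weeks)        # group without the feature
treated = trend + 5.0 + rng.normal(0.0, 1.0, weeks)  # group that gets the feature
treated[launch:] += 3.0                              # true lift after launch

# DiD: (treated after - treated before) minus (control after - control before).
# The subtraction removes the shared trend and the fixed gap between groups.
lift = (treated[launch:].mean() - treated[:launch].mean()) \
     - (control[launch:].mean() - control[:launch].mean())
print(f"estimated lift: {lift:.2f} (true lift: 3.00)")
```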
Tracy Laranjo 31:43
I love it. I'll also add: there's a very awesome deck of cards out there, if you just google something like "experimentation flashcards". It's a deck of cards that shows you all these different types of experiments; there are maybe 50 of them, maybe more, I don't remember. There's the fake door experiment, and so many other ways you can test a big idea without having to depend on the same two or three methods that everybody uses. You can be really creative about it. And maybe that's part of experimentation culture too: letting people know that there is no one way to test something, but there are ways we can't test certain things, and how do we go from there? So yeah, I'd say, Speaker 1 32:36
Yeah, you shouldn't be, you know, strict about your approaches. As you said, you should be creative. There's no one right solution, and you should test it. And don't be afraid, for example, to reach out to people on LinkedIn just to ask something. I do this too. Tracy Laranjo 33:03
Well, that's actually a great segue, because I want our listeners to know how they can reach you and stay on top of anything you're putting out into the world right now. What's the best way to reach you? Speaker 1 33:18
LinkedIn, I would say. I also have a blog on Medium, mostly about experimentation, and I reply to every comment there, if there are any. So those are the two main bridges to me: LinkedIn and Medium. And I'm going to create a third one: right now I'm in the middle of creating a course about how to build an A/B testing system from the ground up. About a year ago I was trying to find some really great courses about advanced A/B testing and experimentation practice, about how to build something from the ground up, but it was impossible. I mean, there are many articles, many videos, many posts, but it's really difficult to gather it all together and structure it into some kind of full cycle. So that's what I'm going to do. Tracy Laranjo 34:42
Yeah, awesome. It's also normally very expensive to get that kind of training, so I love it when there are new people in the space coming out and making content that's a bit more accessible for everyone else. So that's awesome. Speaker 1 34:58
Yes, yes. It's not going to cost, you know, a fortune. It's got to be accessible. Tracy Laranjo 35:05
Well, we're going to drop the links to your blog and your LinkedIn in the show notes so our listeners can follow you and catch up with everything that you're kind of putting out there. Lastly, is there anyone else that comes to mind that we should interview next? Speaker 1 35:25
So if you have the power, I would recommend Andrew Ng. Okay, if that's not an option, I would recommend somebody from Microsoft's experimentation platform, because they publish a lot of really useful posts and they're really great at building it. For example, yesterday I just ran into a guy, Aleksander Fabijan I believe, also from the experimentation platform team. So yeah, I believe he's the right guy. Awesome. Tracy Laranjo 36:08
Thank you so much, Mark. I really appreciate your wisdom on this episode. And thank you to our listeners for catching us on this video episode. I don't know why it's such a big deal to me; I've just never done one before, so I keep bringing it up. But thank you so much, Mark, and thank you to whoever's listening. Awesome. Catch you later. Bye
If you liked this post, sign up for Experiment Nation's newsletter to receive more great interviews like this, memes, editorials, and conference sessions in your inbox: https://bit.ly/3HOKCTK