By Eden Bidani, John Ostrowski, and Rommil Santiago
On today’s episode, John Ostrowski and Eden Bidani talk about how they respond when someone asks them how many experiments they should run and why John keeps a personal list of shame.
The following is an auto-generated transcript of the podcast by https://otter.ai with very light manual editing. It’s mostly correct but listening to the actual podcast would be wildly more understandable.
Rommil Santiago 0:09
From Experiment Nation, I’m Rommil, and this is Adventures in Experimentation. Our panel of CRO professionals leverage their years of experience in UX and conversion copywriting to field common CRO questions. If you are new to the field, or even if you are a veteran, you’ll always learn something new. So welcome, everyone. This episode of Adventures in Experimentation has been brought to you with the support of our friends over at VWO. With me today are John and Eden, and I’ll let them introduce themselves.
John Ostrowski 0:43
Rommil, thanks for the introduction. It’s my first time doing a podcast, so I’m excited to start with Experiment Nation. I’m John Ostrowski, also known as Positive John: Brazilian ex-pat, proud engineer, data nerd, and in a long relationship with growth experiments and normal distributions. At this moment, my full-time position is at Brainly, having fun running experiments for 350 million monthly active users, which is pretty awesome. I’m always chipping away at some growth experiment side projects as well, and today I’m also working with eyetech, exploring some different verticals, but you can check out more about all that later.
Rommil Santiago 1:32
I just want to point out that you said 350 million and growing. I’m like, where do you go from 350 million?
John Ostrowski 1:38
Oh yeah, I think those are the official numbers, but it’s only going up, man. Education technology these days.
Rommil Santiago 1:46
Very, very impressive. And I’m jealous. Okay, and Eden? What about yourself?
Eden Bidani 1:51
Thanks, Rommil. So my name is Eden. I’m a conversion copywriter, trained in anthropology and sociology. For the past five years, I’ve been running my own micro agency, Greenlight Copy, helping companies develop smart messaging strategies and copywriting to support their user journeys and increase conversion rates across the funnel.
Rommil Santiago 2:12
Very cool. And whereabouts are you located?
Eden Bidani 2:14
I’m in Israel. I’m Australian, but I’m over in Israel.
Rommil Santiago 2:17
So thanks, everyone, for joining. I think it’s a great opportunity to dive into the questions. For those who are new to the podcast, what we do in Adventures in Experimentation is explore and answer questions from the community, questions we’ve seen as we experiment and work with clients. We get all these questions, and this is an opportunity for us to address them. And for those listening who have their own questions, feel free to reach out and send in your thoughts, and we’d be happy to pick them up and answer them on a future episode. So let’s dive right in. I have a question here from Growth Mentor. Is that how you pronounce it?
John Ostrowski 3:03
Exactly.
Rommil Santiago 3:05
Okay, from Growth Mentor. And it goes, “I’d like to start with experiments, but I’m not sure how many I can run to start with.”
John Ostrowski 3:14
I feel like when I hear this kind of question, it’s people asking what it takes to start the process, to kick off experiments in a team that never really touched that kind of toolset. Eden, have you heard anything like that in your experience on the copywriting side, or even in Growth Mentor? Let’s maybe scope the question and the context around it before we jump into suggestions.
Eden Bidani 3:48
Yes. Just hearing a question like this makes me wonder. Like you said, it sounds like they’re not even sure what kind of tools they need or where to start, or what kind of tests are going to drive the most value at the end of the day. What are the quick tests they can start pushing out, start running, to get some wins in early on as they keep expanding?
Rommil Santiago 4:20
So it sounds like there are a couple of areas here where the person asking this question is unfamiliar with the tools and unfamiliar with what kind of tests they can run. It does sound like someone very green. So how would you go about introducing someone who is this new to experimentation? Where would you start?
John Ostrowski 4:45
Well, let me tell you a quick backstory, because this type of question takes me back a little bit. This was the first question that really got me bogged down in how applied statistics applies to experiments. I remember my CEO back at Ladder.io, the growth agency where I really started getting my hands around conversion optimization, came to me about a big account asking, okay, how can we do experiments for this account? We were trying to sell a big CRO package, and the question was something like, okay, how do we start experiments, and how many experiments per month can we run for this account? And I was like, how the heck can I reduce this question to a range, to a number? This is scary, I don’t really know how to go about this. From that, I remember studying this topic specifically for quite some time. I got coaching from Ton Wesseling from Online Dialogue, trying to understand how I could answer this with a range, like, you can run X amount of experiments per month. But from experience, when facing this type of question, I now take a step back. And here, not to reinvent the wheel, I go with what Reforge uses as a definition of what it takes to experiment. I bring questions like: does this team have the right infrastructure, the technology infrastructure, to deploy experiments efficiently? Do they have the time needed to get valid results? And do they have the knowledge inside the team to design, execute, and analyze, and to avoid false signals? Usually, once they confirm that set of questions, the discussion goes to a more fundamental, back-to-basics place. Have you guys experienced this?
Rommil Santiago 7:10
So if I understand what you’ve just said, you look at the client or whoever is asking, and you try to figure out what their capabilities are. Do you have the people? Do you have the tooling? Do you have the time to even run experiments? So you’re first exploring whether they can even start. Do I have that right?
John Ostrowski 7:33
Absolutely. Because people sometimes don’t really know all the costs involved in running experiments, right? So scoping those three dimensions usually gives me and the team confidence to proceed, once they really understand that, okay, the infrastructure is there, time availability is there, and the knowledge is within the team. Then, once we’ve agreed we’re at the right stage for running experiments, I follow up with a bandwidth calculation, where the outcome is a statistically valid answer for how many experiments we can run per month or year, given the traffic that we have on the different templates, the baseline conversion metric that we’re trying to move, and the alpha and beta values for the experiments. But that’s getting very into the nuts and bolts. First I scope, again, infrastructure, time, knowledge; then I jump to the more technical question of, you know, what’s the number?
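A minimal sketch of the kind of bandwidth calculation John describes, assuming a simple two-proportion z-test for sizing; the monthly traffic, baseline conversion rate, and minimum detectable effect below are illustrative assumptions, not numbers from the episode.

```python
# Estimate how many A/B tests a page's traffic can support per month,
# given a baseline conversion rate, the smallest relative lift worth
# detecting (MDE), and the alpha/beta risk levels the team accepts.
from scipy.stats import norm

def sample_size_per_variant(baseline, mde_rel, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant for a two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + mde_rel)       # conversion rate we hope to detect
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # power = 1 - beta
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

def experiments_per_month(monthly_visitors, baseline, mde_rel, n_variants=2):
    """Back-to-back tests the traffic supports in a month."""
    visitors_per_test = sample_size_per_variant(baseline, mde_rel) * n_variants
    return monthly_visitors / visitors_per_test

# Illustrative inputs: 200k visitors/month on the template, 3% baseline
# conversion, and we only care about relative lifts of 10% or more.
print(round(experiments_per_month(200_000, 0.03, 0.10), 1))
```

If the answer comes out at or below one, as in the story John tells a little later, the bottleneck is traffic rather than process, and the conversation shifts to acquisition or to testing bigger, bolder changes.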
Rommil Santiago 8:46
And Eden, how would you approach this? So we’ve heard from John about looking at the nuts and bolts of it, but how would you approach this?
Eden Bidani 8:54
Yeah, John, I really appreciate your insight into this from your perspective, but from the way this question is worded, it really makes me think that, like you said, this person is extremely green. It sounds like they’re not even really sure what they’re getting into in terms of experimentation. This feels to me like someone at C-level has come and said, we need to start doing experiments, tell me how many we need to run a month, what’s it going to cost, what’s the budget, how much time, and how many people do you need on a team? That’s what it feels like, and it doesn’t feel like the correct approach to experimentation. Usually, and please correct me if I’m wrong, you don’t start by thinking, well, how many experiments can I run a month? You usually think, what are we trying to look for? What are we trying to solve with experimentation? What are we trying to improve? What are our goals? And then you start to look at that wider picture and break it down. So exactly like you said, John: what kind of tools and infrastructure do you have in place to support it? How deep do you want to go into experimentation? How much traffic do you have to support the kind of experiments you want to run? So it’s actually about taking a little step back and looking at why they want to do this in the first place. What’s their motivation for starting this? Can they maybe get the kind of insights they’re after in another way? Do they actually have to go down this long process?
Rommil Santiago 10:22
So it sounds like both approaches are valid. Talking about what problems they’re trying to solve and what their goals are lays down the groundwork in terms of context: what is their environment, what is the situation they’re facing? And the other half of it is, okay, now that we understand your situation and what you’re trying to achieve, what do you have available? So I actually feel the two answers are very complementary. I was wondering what you thought about that, John?
John Ostrowski 11:02
I must say, I really like how Eden brings in the figure of the C-level, that very top-down perspective of, we need to run experiments, tell me what’s needed. A quick story to share on that. I was brought into a team in very similar conditions: tell us what is needed for us to run experiments on this homepage. Once again, before considering infrastructure, time, and team knowledge, I went through the more technical calculations, okay, let me see how many experiments we can fit in every month, and it ended up that we would be able to run one experiment per month, if we were very lucky. So the problem to be solved wasn’t really experiments, it was traffic first, right? This is how I see it: as you suggested, the two answers are very complementary. And that C-level figure with the top-down perspective paints a funny picture.
Eden Bidani 12:05
Yeah, well, that’s what we find in a lot of our jobs. Sometimes you have higher-ups who have heard that experimentation is something cool, people are talking about it on LinkedIn and everywhere else, and they think it’s something everyone should be doing. So they say, yeah, we should be doing this too. Okay, who do I need to talk to in growth or marketing? Let’s start running experiments, what do you need? And it’s not such a simple process. Like you said, they might not even have enough traffic, they might only be able to run one experiment a month, and then it’s like, okay, you need to reassess. Are you actually going to invest more into this? Do you want to just start doing one experiment a month, or is it actually worthwhile investing in it and building out a team, or at least having a couple of dedicated people early on and then building it out that way?
Rommil Santiago 12:59
I wanted to jump in there and ask: how do the people who have asked you these questions react when you show them reality? Well, with this traffic and these resources, this is what you can do, and it’s not the answer they’re looking for. How do you make them okay with that?
John Ostrowski 13:22
I like that. I feel like this is where there’s a lot of coaching involved in the process of experimentation, because it’s still not a very well explored field, at least for marketers, and I don’t want to generalize. The approach I usually take is, okay, coming with those numbers: if you work out the growth model and you’re able to show very clearly that traffic is the problem to be prioritized first, then maybe, okay, we can start getting our feet wet with one experiment per month, just so the team gets involved and excited with the process. Because there’s also this learning curve of getting excited with the process of experimentation, and getting people used to the fact that it’s not all about wins, right? So when you’re able to bring those things forward in a very clear way, with enough process, okay, experimentation is not really the right tool for where we are in the growth stage of the company or the product right now, then C-level and management are open to understanding, and maybe shifting resources to where the more important problem to be solved is. At least that was my experience. But it took me time to develop the right frameworks and the patience to be able to clarify this. It’s more on the coaching side.
Rommil Santiago 15:02
I definitely want to hear what Eden has to say about this, but I wanted to make a remark on getting people excited about process. I thought that was fun. I get what you’re saying, but I’ve never heard anyone get excited over process other than experimenters, so I think that’s kind of funny. And Eden, what’s your take on it? How would you react?
Eden Bidani 15:27
Yes, so that’s exactly right. Like John said, getting them excited about the process means setting up those expectations early on. So what is experimentation? What are we doing here? What are we trying to solve? The point of experimentation is that it’s okay if it’s not a win; it means you’ve still come away having learned something, and you can apply those learnings moving forward. It doesn’t mean, just because there wasn’t a winner or it failed, that it reflects badly on yourself or on the team or anything that you did. It just means, okay, great, we can take what we’ve learned and apply it moving forward. But again, it’s setting up those expectations, making them aware that there is a process, that there are certain requirements we have to meet along the way to make sure there’s data integrity at the end of the day. And that does take time, and that does take coaching, just like John said.
Rommil Santiago 16:29
To jump on that: so you’ve managed expectations, you’ve shown models, you’ve gotten all these folks on board with the process, and now they’re running experiments. Let’s take that as a scenario. There’s another question coming up, but I thought about, okay, you’ve just managed this person’s expectations that learning is better than winning, but they’re still going to expect wins, obviously. What happens when you break something, when you break an experiment in production? How do you manage that situation? I’d like to start with Eden.
Eden Bidani 17:17
Good question. From my personal experience, I haven’t always been involved in the nitty gritty of the experimentation side. But I think what we see happen a lot is when we get those last-minute changes. You start planning, you start designing the experiment, and then someone on the periphery, like we spoke about before, a manager or C-level, someone from the outside who hasn’t been involved in the planning process from the beginning, comes in, sticks their nose in, and says, I think A, B, C, or D. And that changes some element, and it kind of throws the whole experiment off, because you’ve introduced another variable at a later stage of production than you should have.
Rommil Santiago 18:07
So are you saying that if someone comes in at the late stages and something breaks, you point at them?
Eden Bidani 18:11
No, no, no. What I’m saying is, the point of all experimentation is that you want to reduce and limit as many variables as possible, because the more variables there are, the harder it is to get clear results, to have clarity on the result.
John Ostrowski 18:34
To your point about adding last-minute changes: no front-end engineer likes to work under pressure like that, and experiments usually break when we have last-minute changes. I don’t know about you guys, but when I heard the question about the many ways we can break experiments in production, I just started typing down everything that popped into my mind, because I keep score on this. It’s my personal list of shame that I always keep on the side, to make sure my team and I never commit these again. I have four top ones that caused us trouble in the past. And by trouble I mean you have an experiment live in production for one week, and then you see that you forgot to publish the Google Tag Manager container with your conversion event. That’s one week of no conversion events being fired, and you just need to go and restart your experiment. That was one of the top ones. It’s funny, it’s cringy, it hurts, but it happens. You’re debugging in Tag Manager in a different workspace and you forget to click the Publish button, especially when different people have publish access to the container. Sometimes the person doing QA is not the person who publishes, so this miscommunication lets those things slip into production and we lose time. It’s nothing crazy for the user experience, but for the experiment validation it’s purely time lost. Another one related to QA is forgetting that events need to fire across platforms, especially when you’re dealing with Android and iOS apps, because most of the time those are different engineering teams, very specific Android engineers and iOS engineers, and the coding language is different. So if you don’t coordinate having the same event with the same aliases firing on both sides, that really complicates things, and it’s in the hands of the QA engineer to support on that front. It’s something I’ve suffered with in the past. And the last one, this was a recent one: you’re running a mobile-only, or, for example, logged-in-only experiment, and you forget to set up the targeting in Google Optimize, so you end up showing it to everyone. Sometimes, depending on the code, that’s not really a problem, but on the experiment side, once you query the data, you’re counting all users instead of only the users in that segment. Of course you can do all the processing and ETL to fix it, but your query gets a lot more expensive on the Google Cloud side. So it hurts in that direction.
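A minimal sketch of the kind of pre-launch QA automation John hints at for the cross-platform event problem: compare the event names declared in the Android and iOS tracking plans and flag anything that only exists on one side. The file names and JSON structure here are hypothetical; a real tracking-plan format would need its own loader.

```python
# Flag experiment events that are declared on one mobile platform but not the
# other, so mismatched aliases are caught before the test goes live.
import json

def load_event_names(path):
    """Return the set of event names declared in a tracking-plan JSON file."""
    with open(path) as f:
        plan = json.load(f)
    return {event["name"] for event in plan["events"]}

android_events = load_event_names("tracking_plan_android.json")  # hypothetical file
ios_events = load_event_names("tracking_plan_ios.json")          # hypothetical file

only_android = sorted(android_events - ios_events)
only_ios = sorted(ios_events - android_events)

if only_android or only_ios:
    print("Event alias mismatch between platforms:")
    for name in only_android:
        print(f"  {name}: declared on Android, missing on iOS")
    for name in only_ios:
        print(f"  {name}: declared on iOS, missing on Android")
else:
    print("All experiment events are declared on both platforms.")
```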
Rommil Santiago
You actually have, let me get this straight, you actually have a list of every way you’ve broken an experiment? What do you do with this list?
John Ostrowski
I just told you guys, it’s the list of shame.
Rommil Santiago 22:14
Do you share... what I meant was, do you share this with anyone, or is it your personal list of shame?
John Ostrowski 22:20
Actually, yes, at Brainly we do. My growth team, every three experiments, holds an experiment review meeting. It’s an open meeting people can join, with a quick slideshow of, okay, this was the experiment we ran, which variant do you think won, and then people vote, and we kind of have fun showing whether there was a winner or it wasn’t significant. In this meeting we also always review what went wrong, and we open it up for other people in the company to learn from what went wrong. So we have QA engineers from different teams who pass their eyes over it: okay, the setup was wrong that time, maybe we can come up with an automation for that QA step. This is why I keep track of the list. It’s something we keep constantly reviewing, and it really helps, especially when you have rotation between teams. The platforms that we use also change, right? You can set something up via Google Optimize, but that particular setup can also be done via code; different engineers just operate in different ways. So having this open meeting and reviewing the list of shame kind of streamlines that understanding across the company, and it’s an interesting way to advocate for experimentation, at least from my perspective.
Rommil Santiago 23:54
So you’re essentially using this list to add content to a retrospective.
John Ostrowski 24:00
Yeah, with product teams in a knowledge base.
Rommil Santiago 24:04
Yeah, usually when I’ve run retrospectives, it’s: what went wrong, what went right for the period, and that’s kind of where it stops. I mean, we have action items to continue and improve later on, but there isn’t this running list. That’s actually a very interesting concept. Hopefully it’s not that long a list, but it’s a list of things to check against: did you make sure this didn’t happen? That’s an interesting approach.
John Ostrowski 24:30
Yeah. It’s usually the mission I give to QA: our team is moving fast, we make sure we break things fast, but then QA needs to help us find the things we’re breaking even faster in production. So this list actually helps them.
Rommil Santiago 24:49
And you do this, like you mentioned, every three experiments, reviewing experiment results. I wanted to hear from Eden: how do you present results? Is there a cadence? And how do you ensure everyone’s going to be okay with whatever the outcomes are?
Eden Bidani 25:11
In terms of making sure everyone’s on board, or at least prepared for what the outcomes might be, whatever they end up being, it comes back to setting up those expectations up front and being realistic. Like you said, everyone’s always hoping for a win, but being prepared for failure, being ready to accept the likelihood of failure, goes a long way, no matter what the results end up being: a huge win, a small win, or a failure, and by how much. Having that kind of open discussion, getting as many people involved as possible, just as John mentioned, and sharing the results around is really crucial in helping you push those ideas forward. So: what did we learn as a result? How can we apply this moving forward? What are we going to do differently next time? And again, like John said, try to pinpoint all those tiny things: this is what we should have done better, or this is what we can try differently.
Rommil Santiago 26:33
I’m curious how many people you share the results with, at least with the folks you’ve worked with. I assume it’s not an email to the entire company, so how do you decide the scope of who should receive the results or be included in these conversations?
John Ostrowski 26:50
Very good question. I refined this communication process mostly during Reforge, in the experimentation deep dive; they have a module just on communicating up and down the chain of command the different values and information that come out of a growth team. What I do with my team today is very simplified. We basically have a business summary template that we send to the larger team, because the growth team lives inside a bigger team, kind of a chapter. That business summary always contains the link to the fuller analysis, a spreadsheet with the nitty gritty of the analysis and the analyst’s recommendations based on it. If there’s anything weird in the analysis, we call the experiment review meeting right away; if there’s nothing weird, then after three experiments we’ll have the experiment review meeting anyway, and that’s where we open the communication to a broader audience in the company, since it’s an open meeting. So this is how we try to advocate for the culture of experimentation on a smaller scale. In our case, we’re a growth team inside an engagement chapter, so we send the business summary to the entire engagement team, because they can benefit from that learning in different areas of the product, let’s say. And then the experiment review is this idea of getting fresh eyes, people who might challenge the results we found in those three past experiments. This is how we’re trying to do the up-and-down chain-of-command communication. I still have a couple of things to improve, especially the management and C-level communication about where experimentation should go to improve the growth of the company in a more proactive way. We’re not there yet, still growing.
Rommil Santiago 29:11
Hmm, that’s very cool. And Eden, anything to add to that?
Eden Bidani 29:15
I actually just wanted to ask: so John, what do you feel is the main barrier to getting C-level or higher-ups on the same page with experimentation?
John Ostrowski 29:28
Hmm. That’s a good question. What I usually try my best to do is understand what metrics they are looking at on a weekly basis. So my VP of Subscription is looking at the total number of subscribers, something along those lines, and my Director of Product is looking at the overall retention of the product. If you’re able to see, from either a qualitative data point or a quantitative analysis, that there’s an area of improvement in the metric that concerns them, because they have OKRs attached to that metric, that’s where I see experimentation becoming a very high-leverage conversation with that stakeholder. So whenever I have an idea, from the broader team, from internal discussions, from other one-on-ones, that touches on, okay, this is potentially a good experiment to run toward the total number of subscribers, I’m 100% sure that in my next one-on-one with the VP of Subscription, they will listen with more attention. But it’s always this exercise of: what is the KPI that’s on their mind, and how can experimentation be high leverage for them? It’s just a tool for them. This is how I see the communication being facilitated a lot. How do you guys feel about that?
Eden Bidani 31:01
Right, yeah, that makes absolute sense. Tying it back to those key KPIs, to what they are looking for, so you can show how it ties in directly. That always makes it very powerful, and it makes it very relevant for them; it’s related to exactly what they’re doing.
John Ostrowski 31:20
Yeah, I feel like for some of the people listening, maybe it’s easier said than done, because I’m coming from Brainly, a very data-informed company where people talk data very often, so it’s easy to tie discussions to the metric that’s on a stakeholder’s mind. But if you don’t have a stakeholder who talks in terms of numbers, you already have a barrier in the communication. So a different path might be necessary: okay, how can experiments be high leverage for someone who doesn’t speak in terms of data? Maybe by testing and de-risking their crazy, new, shiny ideas, when numbers are not very relevant in the discussion. Stakeholders all have edgy ideas that they’re willing to experiment with, if you can position it as, okay, this whole thing that I do with A/B tests is a way to de-risk putting your idea in production, maybe we could try that together. I also saw that working when there was a stakeholder who just wasn’t very familiar with talking numbers.
Rommil Santiago 32:37
It sounds like, regardless of whether they’re data-driven or not, one of the strongest approaches is to make whatever you’re saying relatable to the stakeholder. So understand what they care about. Hopefully it’s metrics, and if it’s not, maybe you can one day convince them of metrics, but really it’s about figuring out what they care about, what pressures they’re under to deliver, and ensuring that your program speaks to those needs. Would you say that’s fair?
John Ostrowski 33:17
Eden, I feel like you can complete this discussion from a communication perspective, like meeting the audience where their mind is at. I feel like it’s a universal rule of thumb, but bring us your specialist thoughts on that.
Eden Bidani 33:33
Thanks for that, John. No, absolutely. We’re all passionate about what we do for a living, but in terms of crossing that barrier of communication, as soon as you can tie what you’re doing directly into the other person’s goals, the other person’s agenda, the pains and problems that they’re struggling with, and you can show them how you’re helping them, how you’re supporting them, how you’re lifting them up and pushing their agenda forward, they become much more open. By framing what you have to share from their point of view, relating it back to their context, they immediately become more willing to hear what else you have to say. It’s like reciprocity: you do something for them from the beginning, and they’ll be more than willing to do something for you. So the more you position your findings, your results, in terms of how they’re going to benefit them and help them move forward with what they want to do, the more it automatically helps them be in tune with listening to what else you have to share beyond that. They actually become more interested: hey, I can see how this relates to my needs, I can see that this is relevant; maybe they have something else interesting to share, I’m ready and willing to listen.
John Ostrowski 35:08
You see, we started this topic with a list of the many ways things can break in production in an experiment, and we’re finishing with tiny bits of Cialdini on reciprocity. That’s a beautiful loop.
Rommil Santiago 35:26
So that’s it. What did we learn today? Well, we learned that when someone comes to you asking how many experiments to run, we need to get more context on their tech stack and their bandwidth, as well as their goals and what they’re trying to achieve. And we also learned the value of keeping track of our mistakes. It’s all about learning, isn’t it? And finally, I hope you enjoyed today’s episode. If you haven’t already, and you think we earned it, please consider subscribing. Thank you, and until next time.
(Transcribed by https://otter.ai)
About Experiment Nation
Experiment Nation connects Experimenters from around the world to new ideas – and to each other.
We send a newsletter every week.