Experiment Nation - The Global Home of CROs and Experimenters


Don’t rely on macro goals with Gursimran (Simar) Gujral

AI-Generated Summary

Simar, CEO and Founder of OptiPhoenix, shared invaluable insights on A/B testing and CRO at Experiment Nation. Here are 5 key takeaways:

1. Don't rely solely on macro goals: micro goals reveal user behavior, driving better testing strategies and informing future hypotheses.
2. Iterative testing trumps scrapping: learn from failed tests by adjusting hypotheses and running new experiments, boosting your win rate.
3. QA is critical but often neglected: invest time in thorough QA, ensuring your test is accurate and integrated correctly.
4. Outsourcing A/B test development can be beneficial: consider outsourcing for faster scale-up, access to expert resources, and a reduced development burden.
5. Prioritize experimentation within your workflow: make A/B testing an integral part of your product development process, leading to better decisions and faster results.

https://youtu.be/NEMMmLVHjJ4


AI-Generated Transcript

(00:00) And I always say that reporting is like storytelling. If you read a book, a children's book at bedtime, it usually has a purpose or a moral, some learning that you pass on to the kid. [Music] Hey everybody, it's Richard here

(00:26) from the Experiment Nation Podcast. Today I've got Simar on the line. He's from OptiPhoenix, where he's the CEO and founder of an A/B testing and web development company based in India. Welcome to the podcast, Simar. Hi, nice to see you again. How are you? Yeah, good, thanks. Look, we had early-stage conversations

(00:47) about getting on the podcast and for some reason it sort of dropped in the pipeline, but here we are, and it's good to see you again. So, as I was saying, Simar, just to introduce him to our audience, is the co-founder and CEO of OptiPhoenix, which is basically, how would

(01:09) you describe it, a white-label or offshore-development-based CRO and experimentation agency based in India, although you have clients of your own. Maybe you can talk us through how you got involved in this crazy world of experimentation and CRO, and go from there. Yeah, why not. Thanks

(01:31) for the quick introduction, Richard. The story goes back to 2012, when I started working as an engineer at VWO, which is the famous A/B testing tool. At that time you could count on your fingers the tools that offered A/B testing, not even as a service but as a platform.

(01:54) There was probably Optimizely, there was VWO, and maybe Convert.com or a few others. And at that time there was Google Content Experiments, which only allowed you to split test; there was nothing you could do apart from split testing. So that is where our journey started back in 2012, where we

(02:14) learned about optimization and CRO. But soon enough we realized there was a big gap in the market in terms of how experimentation should be done versus how it was being carried out. Even some of the mature agencies at that time, when we were at VWO working closely with these agencies, really

(02:35) had me wondering why exactly they were using the tool the way they were using it, because it is not how it is supposed to be used. Like setting up experiments that were supposed to run only on product detail pages but running them sitewide. Can you imagine the pollution in the data

(02:54) they must have been looking at? And I'm talking about some of the mature agencies here. At that time we realized that, okay, there are various tools that offer CRO, but there was no CRO agency, especially in India at that time, that offered an affordable but, more essentially, an

(03:14) expert service to really bridge that gap between how it should be done versus how it was being carried out. That is how we started OptiPhoenix back in 2015, and I was lucky enough to work with brands like Jabra, Canon, Domino's, Subaru, Volkswagen and Pizza Hut. Throughout these nine years I've

(03:39) closely seen how they run CRO programs and how some of the mature CRO agencies do this as well. And at that time the mission was simple: how can we get the data faster to these agencies and organizations, and give them data not just in the fastest possible manner but also accurate data?

(04:01) Because, as I said, the problem with development is not only that you need to code the A/B test; it is also about how you're setting up your A/B test, which is really crucial. That involves targeting the right audience, targeting the right pages, and integrating it with the analytics tool, which is really,

(04:23) really important to see the data from top to bottom. So that is how we started, to really bridge that gap. And of course, development and engineering has always been a challenge in every organization; it doesn't matter how big your development team is, they always

(04:44) have a big backlog of things, and then there is of course a learning curve. If you want to introduce A/B testing or CRO as a program, you may hire a data analyst, you may hire a UI designer, but you may already have an established development team, and they have to go through this learning curve of

(05:03) learning the tool and the technology to really set up the test, and that is where the challenge is. You buy a tool, and I've seen agencies and organizations do this, they buy a tool for like 50k a year and run only two A/B tests, because they did not have... It happens, it happens in

(05:24) corporate. I don't know if it's too much to manage or what it is, but yeah. So yeah, I think that is how our journey started and has evolved since then. So yeah, glad to have you on this podcast with Experiment Nation. So

(05:46) basically, just to recap: 2012, in CRO terms, that was donkey's years, ten years ago. You're basically saying there weren't really good frameworks for doing CRO at that time, generally speaking, that you could see in the market, and people weren't really using the

(06:07) testing tools as robustly or as rigorously as today's sort of CRO marketing agency would. Yeah, I think there were frameworks that existed back then, but of course they were not as mature as they are in

(06:29) today's market. Today, if you search for CRO and how it should be done, you get to see ten different courses available that ask you to, hey, buy me, and you will learn how to do CRO. But of course there were fewer courses at that time, or people had limited know-how.

(06:48) But even for the ones who had the know-how, essentially what these courses tell you, these courses are designed in a way to really help you understand the psychology: the buyer psychology, online psychology, how to use cognitive biases, how to do your research, what

(07:10) qualitative methods to look at, or how to evaluate the results in analytics. So they tell you everything related to research and hypotheses, but I do not know if there are any courses which specifically talk about how to develop your test, how to brief your test to the developer, how to

(07:33) really run a controlled experiment. Because again, you could have the greatest analytics team or CRO resources that invest a whole lot of time designing a hypothesis, but if that hypothesis is not tested in a controlled manner, then that is a waste of time. So I think that was the missing

(07:55) part of the puzzle. Hi, this is Rommil from Experiment Nation. If you'd like to connect with hundreds of experimenters from around the world, consider joining our Slack channel. You can find the link in the description. Now back to the episode. Right, we identified that, okay, people had the ideas that

(08:09) they wanted to test, but they were short on resources or had limited know-how of how to really run a controlled experiment. Because, do not mind me saying this, but the majority of A/B testing tools simply sell this idea that we are the easiest A/B testing tool in the market, be it

(08:29) VWO, be it AB Tasty or any other. So they simply sell you this idea that any marketer can go in there and use their visual editor to really make the changes, but that is not how it goes. And then you end up breaking your website using the WYSIWYG. Exactly,

(08:50) and over the last nine or ten years we have been in the business, there is not a single experiment that we have done that way, and I've worked very closely with certain agencies who are good friends, and that is a story they tell me as well, that they

(09:09) haven't used the visual editor in years to make any change, because that is not how it works. So again, as I said, it is selling the A/B testing tool with this idea that it is the easiest A/B testing tool on earth, versus when you buy it, you spend 50k a year and you only end up doing two

(09:31) experiments. So yeah, I think that was the gap we were trying to solve at that point in time. Yeah, well, here you are nine years later. Look, before we get into it, I know you wanted to talk about a few topics. One was the sort of time spent on doing pre-test analysis, and then you

(09:54) run the test and then do post-test analysis. What key points do you want to talk about there, where do you see people go wrong, and what have you done to improve those things as an agency? Yeah, so again, at OptiPhoenix we started with

(10:12) the mission of helping agencies and organizations do faster development, whereas running a CRO program ourselves was something we naturally inherited as well. So as an agency we do two things: end-to-end CRO optimization and also A/B test development. So we

(10:34) offer two different services. And what I realized is that, again, talking about the courses or the frameworks that are available out there, generally they all talk about how to carry out your research, and focus a lot on pre-test research and how to build your hypothesis.

(10:54) Even the many CROs I've worked with, what they're really good at is coming up with the hypothesis, and what they're not good at is what happens if that hypothesis fails, really understanding why it failed. Because if someone is promising you a 50% or 60% win rate, I think they're

(11:14) lying to you. Because in a good CRO program, even when you do your research well and spend a lot of time on it, you are probably going to get at best a 20-25% win rate, which means that out of ten experiments you're doing, maybe seven or eight are failing. But what

(11:36) about those failing tests? I completely agree. Sorry to interrupt, but I do remember Ronny Kohavi saying, you know, you should use this as a baseline, and he's one of the top dogs in experimentation, so if he's saying this I take it as a matter of fact. He was saying

(11:57) basically you should expect a third of your experiments to win, a third to fail and a third to be inconclusive, and from my experience I would use that as a rough heuristic for what to expect. Yeah, since you quoted Ronny, I remember another post from him saying that if you're getting a 10% or 12%

(12:14) uplift, again, that is likely a false positive. It is very unlikely, even in the best scenario, that you would get such a big uplift unless there was a UX issue that you simply solved using an A/B test, or a bug that you solved. Other than that, your

(12:33) best possible bet is something like a 4% or 5% uplift, given that you are testing it with enough sample size. So again, that is what we have found as well, that at best you get a 20-25% win rate.
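
To put those uplift and sample-size numbers in context, here is a minimal sketch of the standard two-proportion sample-size calculation in TypeScript. The 3% baseline conversion rate and the 95%-confidence/80%-power settings are illustrative assumptions, not figures from the conversation.

```typescript
// Rough per-variant sample size for a two-proportion test.
// Defaults: alpha = 0.05 two-sided (z = 1.96) and 80% power (z = 0.84).
function requiredSampleSizePerVariant(
  baselineRate: number,   // e.g. 0.03 = 3% conversion rate in the control
  relativeUplift: number, // e.g. 0.05 = a 5% relative improvement
  zAlpha = 1.96,
  zBeta = 0.84
): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeUplift);
  const pBar = (p1 + p2) / 2;
  const numerator =
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
    zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil((numerator * numerator) / ((p2 - p1) ** 2));
}

// A 5% relative lift on a 3% baseline needs roughly 208,000 visitors per variant.
console.log(requiredSampleSizePerVariant(0.03, 0.05));
```

At that baseline, a realistic 4-5% lift needs a very large sample per variant, which is one reason smaller sites rarely reach significance on the macro goal alone.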

(12:56) But coming back to the 70-80% of tests which lose: what I've seen is people spend a ton of time just going through qualitative analysis, surveys, usability testing, looking at GA and Adobe, but then run a hypothesis which fails. It doesn't really mean that you had a bad hypothesis; it just means that maybe your execution was bad. So it is important that you're looking at the

(13:19) user behavior and what changes in the user behavior you see in the data, and stress on that part as well: doing the post-test analysis, which is as important as doing the pre-test research. Only then can you learn from the data and pick up the insights which will actually feed your next

(13:40) hypothesis. So for example, the size guide is a common issue; sizing is a common issue in the fashion e-commerce space, and you often read survey responses where people tell you that because of sizing, I'm not able to understand what size will fit, I'm not able to buy this product,

(13:58) this is what is keeping me from completing my purchase. You run an A/B test, you run it for three odd weeks, you see that okay, it was an inconclusive or a negative test. But does that mean you have solved the problem? No, the problem still exists. You just need to rework your

(14:16) hypothesis, and you can only rework your hypothesis once you actually look at the data you have for those three weeks. If you're not looking at that data, it means you're wasting your time, and you're not really running a CRO program but rather just following tactics to

(14:32) solve a bigger problem, which cannot be solved by tactics but by a strategy. So you need to be strategic when running the test: looking at the data, looking at the various micro goals and events that will actually tell you how the test went and what changes in the user behavior you have seen, and that will of course

(14:52) dictate the next test hypothesis that you can form out of it. Just going back again to the post-test analysis, and that real-world example on an e-commerce fashion site where you said sizing is an issue: let's just say you ran a test on sizing, maybe you can give me an example, but

(15:12) let's just say you ran a test and it was either inconclusive or failed. Would you go back to the hypothesis and sort of not necessarily say this hypothesis is wrong, but maybe I need to iterate on the hypothesis, move to another hypothesis, and rerun the test in a different way? Does that make sense?

(15:32) Because I think, well, I'm a fan of iterative testing, but I also know that we've got limited resources, so there are always those sorts of constraints. But does that make sense, am I going along the same lines: don't necessarily scrap the hypothesis, but just sort of think,

(15:53) okay, maybe we need to adjust the hypothesis, or maybe the way the test was designed could be altered, or maybe the UX needs to be improved or whatever, and then maybe run it again? Absolutely. A number of times I've

(16:11) seen that, and I have got a better win rate when I iterate on my losing tests. Because if you're scrapping your test hypothesis, it means you have wasted all the time you invested doing the pre-test research, because the pre-test research was what actually made you reach this hypothesis. And if that hypothesis

(16:31) failed and you're scrapping it, it means you have wasted the time you took to test that hypothesis and also the time you invested to really get to that idea. So again, iterative testing, from my experience, usually gives you a better win rate compared to testing a whole new hypothesis from

(16:50) scratch. And it is important that you learn from the data. Again, if you're testing a size guide or a size chart, one common mistake that I have seen people make is this: if a test is running on a product detail page, not a lot of users will actually scroll to the size guide.

(17:10) Only 50, 60, 70% of users will scroll to that portion of the page, especially on mobile, where people generally swipe through the gallery images and so on and wouldn't really get to that portion. And if you're looking at the whole sample size, it means you're looking at a polluted sample size that contains all

(17:27) those users as well who did not even notice the change that you made. So it is really important that your A/B test is integrated with the analytics tool, where you can actually evaluate the data in much more depth, and how you do that is of course by setting up the right events:

(17:45) how many users are scrolling to your change, how many users are clicking or interacting with the change, what is the conversion rate for those users who are interacting with the change, and how that has affected the funnel movement.
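
As an illustration of the kind of event setup Simar is describing, here is a minimal TypeScript sketch for a hypothetical size-guide test. The element selector, event names, experiment ID, and the GTM-style dataLayer push are all assumptions; each testing or analytics tool has its own API for custom events.

```typescript
// Push a micro-goal event to a GTM-style dataLayer. The event names and the
// experiment id ("pdp-size-guide-v2") are made-up examples.
function trackMicroGoal(eventName: string): void {
  const w = window as unknown as { dataLayer?: Record<string, unknown>[] };
  w.dataLayer = w.dataLayer ?? [];
  w.dataLayer.push({ event: eventName, experiment: "pdp-size-guide-v2" });
}

// "#size-guide" is a hypothetical selector for the element the variation changes.
const sizeGuide = document.querySelector("#size-guide");
if (sizeGuide) {
  // Exposure: fire once when the user actually scrolls the change into view.
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      trackMicroGoal("size_guide_viewed");
      observer.disconnect();
    }
  }, { threshold: 0.5 });
  observer.observe(sizeGuide);

  // Interaction: the user clicked / opened the tested element.
  sizeGuide.addEventListener("click", () => trackMicroGoal("size_guide_opened"));
}
```

With exposure and interaction captured as separate events, the downstream analysis can be cut by users who actually saw or used the change.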

(18:00) If a user is interacting with your change, what is the add-to-cart rate for those users versus users who interact with that same element in the original? Has it increased or has it reduced? If it has reduced, maybe the change you made had a lot of components to it, and that either confused the users or distracted them, so you therefore see a

(18:17) lower add-to-cart rate. Or maybe your add-to-cart rate was better, and you need to do something else to really convert those add-to-carts into conversions. So again, it is all about studying the data in depth, and not just looking at the macro goal, which for

(18:35) e-commerce is of course the transaction that most people rely on and that matters to the stakeholders. But then again, if you're talking about creating a culture of experimentation, it is not just about getting the wins on those conversions but the learnings, the insights, that you get from those A/B

(18:53) tests that can answer certain other business questions. Maybe let's segue, while you're there, into the importance of macro versus micro goals, because I know to a stakeholder they're probably just going to want to know, okay, for e-comm, how many

(19:12) people went to the checkout and actually made a final conversion. But, I don't know about you, in Australia at least we're a small country compared to, say, the States or many countries in Europe, so we don't always have the population to hit stat sig for

(19:31) the final conversion. For instance, I might set the primary metric as add to cart and then have a secondary metric as final conversion, or something like that, and it might not even reach stat sig for that

(19:52) secondary metric, but it might reach stat sig for adding to cart and going to checkout. So maybe just talk about those sorts of traffic implications, in the context of macro and micro goals. Yeah, absolutely. Again, I think for stakeholders, as you

(20:14) mentioned, what matters to them is how much revenue uplift have I got. But that is the bigger picture, and if you're not looking at the micro goals, you're never going to achieve a lot of it in the macro goal either. Because if you're running a test and you're just

(20:31) focusing on the macro goal and not the micro goals, so for example if I go back to the sizing test, you run that as a test, you see okay, the test was inconclusive, let me just try out another test, and that failed as well. That means you're not really caring about how users reacted to the

(20:47) change, and you're just guessing why it may not have worked, without really looking at why, and what change in the user behavior it made. And there are a lot of tests that you actually run on elements hidden behind a certain interaction. So for example, if you're optimizing

(21:07) the menu, which is behind the hamburger: if you're again looking at the entire sample size and not at the specific event for those users who actually clicked on the hamburger menu, you are simply looking at a polluted sample size. If you're making your change in the

(21:25) third or fourth fold and you're not caring about how many users actually scroll to the change and how those users converted; if you're running a test on a login or sign-up popup, you need to understand that the popup might be present across the site, but how many users are actually seeing

(21:43) that popup? Or the case of the mini cart or the side cart, which is present sitewide, but how many users are opening that side cart? So I think, again, micro goals are super important for understanding the user behavior, because you can only understand the user behavior and connect the dots if you're evaluating those micro goals.
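
A minimal sketch of what that exposure-based segmentation can look like once the micro-goal events are exported, assuming hypothetical per-session records; the field names here are made up for illustration.

```typescript
// Hypothetical per-session records exported from the analytics tool.
interface Session {
  variant: "control" | "variation";
  sawChange: boolean;   // micro goal: scrolled to / opened the tested element
  addedToCart: boolean; // micro goal further down the funnel
}

// Add-to-cart rate for one variant, optionally restricted to exposed sessions only.
function addToCartRate(
  sessions: Session[],
  variant: Session["variant"],
  exposedOnly: boolean
): number {
  const pool = sessions.filter(
    (s) => s.variant === variant && (!exposedOnly || s.sawChange)
  );
  return pool.length === 0 ? 0 : pool.filter((s) => s.addedToCart).length / pool.length;
}

// Comparing the full ("polluted") sample with the exposed-only segment shows how much
// the sessions that never saw the change dilute the measured effect:
//   addToCartRate(sessions, "variation", false)  vs  addToCartRate(sessions, "variation", true)
```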

(22:03) And I always say that reporting is like storytelling. If you read a book, a children's book at bedtime, it usually has a purpose or a moral, some learning that you pass on to the kid.

(22:28) I've realized this recently because I have a one-and-a-half-year-old toddler to whom I read stories, and there is some moral to them, and that is how I connected this: even your reporting, when you're doing it for an experiment, is like a story. You're actually

(22:48) telling a story: we saw this, because of which there was a change in this particular metric, and since users behaved in this manner we saw this change in the bottom line, or we did not see the change because this did not work. So what you have essentially done is you

(23:09) have connected the dots and passed on the learnings from that experiment. So each report should have that moral: what are you passing on to the business, what exactly are you passing on to the end client? That is what matters. It is not just about how much

(23:29) improvement you have made on the revenue or conversion; that will eventually follow if you're doing the experiments. Yeah. Can I ask a question: let's just say your macro goal was, obviously, checkout, basically form conversion or

(23:49) e-commerce conversion, but you had all these different micro goals, like, I don't know, scroll depth and users going through the funnel, from first to second, second to third. Let's say it was inconclusive in terms of reaching the primary,

(24:13) let's say the primary metric was e-commerce conversions, let's say that was inconclusive or didn't reach stat sig, but you saw that there were definitely improvements in the secondary objectives or the micro conversions, like scroll depth and users going through stages of the funnel more. Would you make a call in that case of

(24:37) calling that test in some sense a winner and putting that into production on the user side, even though the primary metric wasn't conclusive? A very good question, Richard. And, yeah, I'm asking because you're talking about improving the usability. Definitely, and

(24:58) there are scenarios where your macro goal might not be 100% or 95% statistically confident and you still go ahead and call that test a win, and that is how I see it: if you see statistical confidence and enough sample size for your micro goals, wherein you

(25:19) have increased the add-to-basket rate, you have funnel movement with more checkouts, and there is also some increase in the transactions as well, although that might not be statistically confident because you did not gather that much sample size even though you ran the test for enough duration, or maybe you

(25:38) have run the test for enough duration and the results are still inconclusive for the macro goal, but you still see it on the positive side, maybe a 2% or 1.5% uplift, and the change takes a lesser amount of effort to code, and we can clearly see that it

(25:57) has improved the user experience: then I would want to call that test a win. So again, if it is not hurting conversion, and we still see a positive revenue per visitor, revenue per session or conversion rate, along with higher add-

(26:18) to-baskets, checkouts, scrolls, or PDP movement, and that test is a low-cost test to implement, I would still want to go ahead and call it a win, because it has increased certain metrics that matter. And of course, not every user is going to convert in that same session; users are going to convert at a later point as well.
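
For readers who want to see why a micro goal can clear significance while the macro goal does not on the same traffic, here is a small two-proportion z-score sketch; the visitor and conversion counts below are invented purely for illustration.

```typescript
// Two-proportion z-score for a conversion-style goal (compare |z| against ~1.96
// for 95% confidence, two-sided).
function zScore(convA: number, nA: number, convB: number, nB: number): number {
  const pA = convA / nA;
  const pB = convB / nB;
  const pooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
  return (pB - pA) / se;
}

const visitorsPerArm = 20_000;

// Micro goal: add to basket (25.0% vs 26.2%)
console.log(zScore(5_000, visitorsPerArm, 5_240, visitorsPerArm)); // ≈ 2.7, clears 1.96

// Macro goal: purchase (3.0% vs 3.15%) on the same traffic
console.log(zScore(600, visitorsPerArm, 630, visitorsPerArm));     // ≈ 0.9, inconclusive
```

The same relative lift simply has far fewer conversions behind it at the purchase step, so the macro goal stays inconclusive even when the micro goals are clearly significant.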

(26:40) So again, the short answer is it depends on the scenario, but yes, it is not always the case that you would call a test a win only if your macro goal is 95% statistically confident. Yeah, that's what I'm pointing to, because I'm trying to read

(27:00) between the lines of: yes, we didn't reach stat sig for the macro goal, but we can clearly see the users are engaged with the page, they're going further through the funnel, and there are more add-to-carts and so forth. It is a bit of a judgment call. Yes, you could argue you could be implementing a false

(27:21) positive in regards to the primary metric, but you could also argue that we're improving the user experience. So yeah. Hey, just in regards to those sorts of things, I think for the show you wanted to talk about outsourcing A/B test development,

(27:41) and when it comes to companies that want to engage with a company like yourself, what kind of advice would you give them in regards to when to outsource and when not to outsource their A/B test development? Yeah, so on when not to outsource the A/B test development, I would say that

(28:07) if you already have well-defined processes, you have enough bandwidth, and the program is going well, you do not necessarily need to offshore, because if it is all working well for you, why would you want to disturb that synergy? But even in certain cases

(28:29) where you have well-established processes but you still want an extra hand, what you can do is get a resource from offshore who can just amalgamate into your existing processes and team, work as a part of your team, attend your standups, and work the way

(28:48) that you work. The advantage that gives you is that you do not necessarily need to hire someone in house. Maybe last year you were doing five tests a month and you are now increasing that velocity to seven or eight experiments per month, for which you

(29:07) might need an additional resource. So one way to go about it is to hire someone, but then you need to manage their vacations, their assets, their insurance and whatnot, and of course there is a training period that you need to invest in. Whereas if you

(29:25) want to scale quickly, what you can do is hire someone from an agency like OptiPhoenix, or a couple of other agencies that offer this as a service, and simply get a person who is well trained on these tools and processes and who has been

(29:45) doing it for years, and simply have them as an addition to your team. So basically it allows for a quick scale-up, getting your program up and running in a shorter amount of time, instead of hiring a new resource, training them up and getting them accustomed to your

(30:06) processes. Well, what I find, working internally in a company, is that there are all kinds of resource constraints, and the internal developers or webmasters in a company have already got their own work to do, and often if it comes down to doing A/B tests, that's

(30:26) another sort of thing that gets added onto their to-do lists, but it's not really a primary part of their job. And I think the advantage of using an agency partner in that sense is being able to leverage a company that's already got skilled A/B test developers. They're not just,

(30:46) you know, normal engineers; they know how to actually use the platform, they've been doing it for years, and they can just jump in and do it. Whereas if you're using your own people and you're just getting into A/B testing, they're going to have to read the docs, they're

(31:01) probably going to make, to be honest, a lot of errors in QA and stuff like that, because that's just how you learn. Yeah, absolutely. I think that is the second use case: if you have well-established processes in place, you can hire someone who is an

(31:16) addition to your team and they can adhere to the processes that you have already defined; that is one use case. The other use case is you do not have someone in the team who is equipped for test development. There might be a good front-end engineer, but that doesn't necessarily mean that they know

(31:31) how to set up a test. It is not even about coding a test; it is about setting up the test. Setup involves coding, integrations, audience targeting, setting up goals and metrics, etc. Yeah, and QA is a huge thing that gets missed, I've noticed. Would you say

(31:52) QA is one of the biggest reasons why tests fail? This is how I rate it: we have done some analysis internally as well, because we have run about 10,000 experiments over the last 10 years, and what we have realized is that the number one issue

(32:10) is of course maybe your test hypothesis was wrong, that is one; but the second is that your QA was not done accurately, so there were bugs in your experiment because of which it failed. We have learned our lessons over the years as well. What

(32:30) we have realized is that if you're spending 100 hours doing the development, you need to spend about 65% of that time doing the QA, on average. In some cases you may need to invest more time doing the QA than what you're spending on development; that depends upon

(32:51) certain scenarios or use cases, but on average what we have realized is that the sweet spot is about 65% of the time going into QA for your experiment, to really make sure that your experiment is robustly set up. And QA does not just mean testing functional and visual things; it is

(33:10) also about making sure your test is properly integrated, it is running for the right set of audiences and the right set of pages, and all your goals and metrics are tracking correctly. Because if they are not, even if your test is visually and functionally working well, you're getting a wrong set of

(33:30) data, and that is of no use; you're either getting false positives or false negatives. Well, I've been in those situations. I've been in that situation where it wasn't firing. That wasn't down to the testing tool itself,

(33:46) well, it was due to, I won't go into the reason, but a technical issue that was affecting the testing tool, and I was like, what the hell is the point of using this tool when it's showing a completely different conversion rate to what our site is showing? So I was just like, okay, I'll just use the

(34:02) tool for the visual experience and then go into GA for the post-test analysis, because I was like, I can't trust this right now. 100%. And one of the things that I would advise all the optimizers, QA specialists and engineers to do is to really run your experiments on a cookie

(34:19) or IP address: set the test live and test it. Do not just test it in preview mode, because preview mode, depending upon the tool, has certain restrictions on how it behaves. It is always best to test your test on a live cookie, IP address, or query parameter.
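
A rough sketch of the query-parameter-plus-cookie pattern for forcing a variant during live QA. This is a hypothetical helper, not the mechanism of any specific tool; platforms such as VWO or Optimizely ship their own force and preview options, and the parameter and cookie names below are made up.

```typescript
// Read a forced variant for QA from the URL (e.g. ?qa_pdp-size-guide-v2=variation)
// and persist it in a cookie so navigation keeps the same bucket for the session.
function forcedVariantForQA(experimentId: string): string | null {
  const params = new URLSearchParams(window.location.search);
  const forced = params.get(`qa_${experimentId}`);
  if (forced) {
    document.cookie = `qa_${experimentId}=${forced}; path=/; max-age=3600`;
    return forced;
  }
  const match = document.cookie.match(new RegExp(`qa_${experimentId}=([^;]+)`));
  return match ? match[1] : null;
}
```

Because the test is genuinely live, the QA session exercises the real targeting, integrations and goal tracking, not just the visual change.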

(34:41) That way you can really see the data flowing, reports getting populated correctly, visitor counts being shown correctly, and you can test in a live environment. That is how we usually do it for all the tests that we run. That's good advice. And look, we actually already talked about the advantages of outsourcing

(35:02) development, so, look, it's coming up to 7:30 Australian time. Is there anything else you think our audience would like to hear from you, Simar? I believe, yeah, the thing that I love the most is data, and do not simply rely on the data for just the pre-test research. That is what I want to communicate

(35:27) with this session today as well: focus a lot on your micro goals, the ones that will reveal the user behavior, and that will help you understand why your test hypothesis won or failed. So yeah, that would be my single piece of advice. And of course, don't let development be

(35:46) the bottleneck of your experimentation program, because that is generally the case even with large organizations: A/B testing usually takes a backseat, when it shouldn't, because ideally the process should be that an idea leads to the design, and the design should first be A/B tested before it goes

(36:08) to the product team and then to the development team. So I think the layer that is missing in many organizations is that there is no experimentation being done, and largely it is because they do not have the development resources to carry out those experiments. So rather than going directly

(36:26) to the product team and then the development team, I think it should first go to engineering to really be A/B tested, and then only the winning tests, or the learnings from those A/B tests, should be implemented. Good advice. Let's leave it at that. How can our audience contact you if they

(36:45) want to get to know you more and get in touch? Yeah, absolutely, you can reach out to me on LinkedIn, or you can reach out to me on email, which is quite easy: that is my first name, simar, at optiphoenix.com. That would be the best way, and I'm happy to take any questions that the audience may have, and happy to

(37:03) see what they have to say. Cool, thanks Simar, and thanks for being on the podcast. Cheers, bye. Hi, this is Rommil Santiago from Experiment Nation. If you'd like to connect with hundreds of experimenters from around the world, consider joining our Slack channel. You can find the link in the description. Now back to the episode.

If you liked this post, sign up for Experiment Nation's newsletter to receive more great interviews like this, memes, editorials, and conference sessions in your inbox: https://bit.ly/3HOKCTK