Uncovering Solution Bias with Iqbal Ali
AI-Generated Summary
In this podcast episode of Experiment Nation, hosted by Charlotte Bomford, guest Iqbal Ali, an experienced web designer, researcher, and comic book writer, discusses the concept of solution bias and the importance of identifying underlying problems before diving into solutions. Ali emphasizes using AI to help understand problem landscapes, avoiding hastily jumping to solutions, and improving experimentation processes. The conversation covers techniques like premortems and frameworks to minimize potential issues and ensure experiments address real user problems. Ali highlights the collaborative potential of AI tools in maintaining the connection between problem statements and solutions, advocating for a more thoughtful approach to A/B testing and problem-solving in product development.
https://youtu.be/ffR29-Z0M80
AI-Generated Transcript
(00:00) We can't help it; humans can't help but automatically get drawn to solutions. So a solution gets formed with everybody, and in that solution there's often no connection to the underlying problem, and then often there's a whole lot of baggage that comes with that solution.
(00:17) This is what I call solution bias. [Music] Welcome to another podcast episode of Experiment Nation. I'm your host, Charlotte Bomford. Today we're talking to Iqbal Ali, who is a skilled problem solver and comic book enthusiast. So Iqbal, would you like to introduce yourself to our viewers?
Hello, so yes, my name is Iqbal.
(00:42) I've been doing web stuff for the last twenty-plus years. I started off as a designer, moved into user research and UX, then development, and then branched into experimentation. I currently help product teams run experiments and experiment programs, and I've set up my fair share of
(01:08) experiment programs at companies as well. As for the comics, I write comics on the side too.
Amazing, amazing. Very artistic; it's in your blood, isn't it?
Yeah, I call it therapy.
Okay, so I think the topic today is quite interesting, because
(01:32) we always talk about A/B testing, but we don't usually start with why we do A/B testing. It's kind of like problem solving, and weighing which variant is the winner based on the data we're given, right? So what we're going to talk about is the start: where does the very first
(01:56) part of the experimentation process happen, which is problem solving, or identifying the problems? Not entirely problem solving: the first part is identifying the problem. So before you solve a certain issue within a website, a product, or any type of platform that you're using, how
(02:15) would you first know that there's a certain issue that needs to be solved?
Yeah, good question. For me, that is the core beginning of an experimentation program, and if it isn't there, there's a problem. If a hypothesis gets raised, "oh, we want to put in some social proofing" or whatever,
(02:37) well, why? What is it that you're actually trying to solve? In terms of identifying those problems: sometimes the company itself knows what problems there are; the user research team should automatically know, if there is a user research team. If there isn't any
(03:00) knowledge about a problem, there needs to be some sort of audit or discovery about the problem. And even if there is a problem the company knows exists, it still needs to be validated. In terms of places to look, user research
(03:22) is absolutely perfect: user reviews are a gold mine, a treasure trove of information, along with surveys and feedback forms, and then you triangulate them with analytics data. It's also important to separate business problems from actual user problems, so that we focus on the user problems,
(03:53) or at least associate a user problem or user problems with a business problem. Because the other thing is, a business will automatically know its business problems, and it will have a set of user problems that it thinks exist, and those need to be validated.
(04:18) So that's the discovery process, I think: speaking to all the teams, checking the analytics, trying to triangulate some of the problem areas. Existing data is a very good place to start.
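To make the triangulation step concrete, here is a minimal Python sketch that tallies themed user feedback (reviews, surveys, feedback forms) and joins it with funnel analytics. The theme labels, data shapes, and numbers are all hypothetical, invented for illustration; this is not a tool discussed in the episode.

```python
from collections import Counter

# Hypothetical themed feedback, e.g. from manually tagging user reviews,
# survey answers, and feedback-form entries. All labels are invented.
feedback = [
    {"source": "review", "theme": "shipping costs"},
    {"source": "survey", "theme": "shipping costs"},
    {"source": "feedback_form", "theme": "confusing checkout"},
    {"source": "review", "theme": "confusing checkout"},
    {"source": "review", "theme": "shipping costs"},
]

# Hypothetical analytics: drop-off rate at the funnel step each theme touches.
drop_off_rates = {"confusing checkout": 0.42, "shipping costs": 0.31}

# Triangulate: a theme that is loud in feedback AND visible in analytics
# is a better-validated user problem than either signal alone.
mention_counts = Counter(item["theme"] for item in feedback)
for theme, mentions in mention_counts.most_common():
    print(f"{theme}: {mentions} mentions, drop-off={drop_off_rates.get(theme)}")
```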
(04:38) That's very interesting, and I agree with you that that's the first stage to start with: looking at existing data. Most companies generally know what the problem is, but you have to validate whether those really are problems, or whether they only feel like problems to you as an employee. And if it's a business problem, that's a
(04:56) different scenario.
Yeah, yeah, it's a different scenario if it's a business problem. I agree.
Okay, that's very good advice. And since you're doing consultancy for different product teams: when someone approaches you and says, "hey Iqbal, I have
(05:16) all these things, I've done all the discovery process and everything," what's your usual step-by-step process? How can you help them identify the problems based on that user research, and how can you use it as a weapon to help them out?
The first thing is to really
(05:37) understand the business, and I think that is a step that is very difficult; it kind of kills the brain a little bit. You're trying to understand where the business is coming from: the context, the landscape, the environment in which
(05:58) you're operating. Understanding the business first is absolutely critical, and from there you can look into... what was the second question leading into?
So, how would you help them? In case they already have all this information, how would you
(06:17) help them use that information to get to the core of the problem? Because sometimes you have those companies that say, "I have all this user research, just amazing information," but they don't know how to use it. How would you help them?
Lately, I've been using AI to help them do that.
(06:38) Interesting.
The reason for that is because, as I've found when I go into various companies, or when I'm involved in judging experiments for experimentation awards and things like that, there's a heck of a lot of solution bias that happens. There's a heck of a lot
(07:02) of "hey, we think we've got this problem," and then, separately, there's a solution, because we can't help it: humans can't help but automatically get drawn to solutions. So a solution gets formed with everybody, and in that solution there's often no connection to the underlying problem.
Hi, this is Rommil
(07:25) Santiago from Experiment Nation. If you'd like to connect with hundreds of experimenters from around the world, consider joining our Slack channel. You can find the link in the description. Now back to the episode.
And then often there's a whole lot of baggage that comes with that solution. This is what I call solution bias.
(07:41) What I find is that AI does a very good job of taking all of the data that you do have, all of the problems and the research that you've got; and then, when you start conversing with it and trying to extract more information, understanding more about the problem landscape and more
(08:09) about the problems really helps drive a user towards a solution, much more than if they were doing it without AI, because this connective tissue between the problem and the solution gets maintained and stays really strong, and AI really helps to do that. I've been doing this in a
(08:36) number of workshops lately and I've been seeing good results with it. I think that is the tool of choice for me for helping people understand their problem landscape.
What does that look like? You've got me curious. Is it kind of like ChatGPT,
(08:59) where you're conversing with the AI? What does the process look like? Our viewers would probably be curious too.
Yeah, so I've built a human-AI interaction pattern, and this specific interaction pattern is a playbook that I've written
(09:18) with Craig Johan and Marcela Sullivan. What it looks like is basically this: you load up all of the data that you do have. You prime the AI with all of the information you have; for instance, user review data, or some other source, even a voice-of-the-customer report created
(09:40) by your user research team. You load that in, and then you start asking probing questions with the AI. The first phase, the first interaction with the AI and the data, is all about problem exploration and avoiding solutionizing. It's a specific interaction pattern
(10:05) designed to probe deep into the problem and then to explore the problem, to go wide: "have we thought about this, and this, and this?" It uses frameworks like McKinsey's MECE framework; all of those kinds of principles are baked into this interaction
(10:28) pattern. Once you've explored the problem properly, you can go into the next phase, which is about ideation. At the end of the exploration you create a problem statement, or a set of problem statements, and then you take those into a separate playbook, a separate interaction pattern, for
(10:51) ideation. And it's an interaction pattern because it's not just "prompt the AI, get the information back, job done." It's a back and forth: you prompt the AI, and the AI gives you output which is a prompt for yourself to think about certain things and then prompt back.
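As a rough illustration of that back-and-forth pattern, here is a minimal Python sketch of a two-phase loop: a problem-exploration phase whose instructions forbid solutions, followed by a separate ideation phase seeded only with the problem statements. This is not Iqbal's playbook; the prompts, phase rules, model name, and the use of the OpenAI Python client are all assumptions for illustration.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Phase 1 rules: probe the problem, go deep then wide, no solutioning.
EXPLORE_RULES = (
    "You are helping explore a problem landscape from the research provided. "
    "Ask probing questions, go deep and then wide, and refuse to propose "
    "solutions in this phase."
)
# Phase 2 rules: ideate only from the agreed problem statements.
IDEATE_RULES = (
    "Given the problem statements provided, help ideate solutions, including "
    "deliberately wild ideas, and tie every idea back to a problem statement."
)

def run_phase(system_rules: str, primer: str) -> None:
    """Back-and-forth loop: each model reply is a prompt for the human's
    next turn, and the human decides when the phase is done."""
    messages = [
        {"role": "system", "content": system_rules},
        {"role": "user", "content": primer},
    ]
    while True:
        reply = client.chat.completions.create(
            model="gpt-4o", messages=messages
        ).choices[0].message.content
        print(reply)
        human_turn = input("> ")  # the human stays in control of the output
        if human_turn.strip().lower() == "done":
            break
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": human_turn})

# Phase 1: prime with research data (hypothetical file name), explore only.
# run_phase(EXPLORE_RULES, open("voice_of_customer_report.txt").read())
# Phase 2: carry just the problem statements forward into ideation.
# run_phase(IDEATE_RULES, "Problem statements: ...")
```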
(11:17) Yeah, that's very interesting, wow. And it's actually quite interesting that, again, it's not solution-focused: you want to start at the very beginning, with the problem, not think about solutions first, and then dig deeper from there and branch out to,
(11:35) "oh, probably there are things missing in this equation, which is why we're not able to, let's say, move the needle," or something.
Yeah, because oftentimes you're absolutely certain that you've got this problem, but actually, how many tests have you run
(11:55) directly related to that problem, what are the results, and are they being tracked against the problem? All of these relevant questions need to be asked, and the entire experiment setup needs to be aligned so that you can see all experiments related to a problem and all
(12:20) of the information related to a problem. That gives you a very good sense of what that problem landscape, as I like to call it, looks like.
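One lightweight way to get that alignment, offered here as a hypothetical sketch rather than anything prescribed in the episode, is to tag every experiment with the problem statement it targets, so the problem landscape can be queried directly:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    problem_id: str             # every test is tied to a problem statement
    outcome: str | None = None  # "win" | "loss" | "inconclusive"

# Invented example data: two problems, three tests.
experiments = [
    Experiment("sticky-cta", "P1-confusing-checkout", "win"),
    Experiment("trust-badges", "P1-confusing-checkout", "inconclusive"),
    Experiment("free-shipping-banner", "P2-shipping-costs", "loss"),
]

def landscape(problem_id: str) -> list[Experiment]:
    """All experiments, and their results, tracked against one problem."""
    return [e for e in experiments if e.problem_id == problem_id]

print(landscape("P1-confusing-checkout"))
```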
(12:38) That's good. I was about to ask what the benefits and disadvantages are, but I feel like there are more advantages to what you're doing right now than disadvantages. Could you tell the viewers about the benefits of using your AI interaction tool to learn about the problem more deeply?
Yeah. The first thing is that it avoids solution bias. And not only does it avoid solution bias, it
(13:01) actually makes solution bias apparent as you're using it; it becomes clear to you that solution bias exists. It also diversifies your way of thinking, because proper problem exploration, MECE frameworks, issue trees, all of those kinds of things, is actually not
(13:24) easy; it's very difficult, and there's a lot of friction involved, which is why we often skip or race through that step in order to get to the solution. By using AI, it helps us slow down just a tiny bit in order to explore the problem much better,
(13:45) and it avoids the solution biases, expands the diversity of thinking, and really helps you understand the problem. It also helps you empathize with the users more, because as you start to understand user problems, empathy automatically develops, and that's
(14:08) a very strong emotion to carry into solutionizing. Just understanding the problem helps build that empathy bond. It also means that if you've got a number of people across a number of different teams,
(14:31) with various different experience levels, it brings everyone up, so the bar is set higher and everyone's experience is at a certain level. Then, when it comes to ideating, it helps diversify the ideation as well; we've definitely seen that in the
(14:53) workshops. It also just helps organize thoughts. And in workshops, some of the best ideas come from crazy ideas, and by having the AI propose the crazy ideas, you free the human from having to be the person who is potentially seen as the crazy one, the
(15:20) one on drugs or whatever. By having the AI take that mantle, it also helps with developing ideas.
That's really good. I'm interested, because, again, solution bias is kind of emotional, like you've mentioned, and there would be clients
(15:40) who come in with this emotion towards their solution. Have you dealt with a person or stakeholder who was very emotional about their solution bias, thinking they already know what the
(16:02) solution should be? And if there's any resistance to the AI that you're also using, have you experienced that, and how would you sell the benefit you just mentioned?
Oh, there's 100% a lot of resistance, a huge amount of resistance to AI, especially when it comes to doing this
(16:20) sort of thing. One thing that helps: when we do workshops, the attendees, not myself, do see the workshops as fun. So that's the first thing. When you get a team together and you position it like, "we're
(16:44) just going to have a bit of fun, let's play around with this new-fangled thing called AI," what I've seen happen is, like I mentioned before, that the solution bias becomes apparent to them as they are interacting with the AI. So there's
(17:06) no need for me to tell them, "hey, you have solution bias here." It just becomes apparent to them, and it opens their minds a little bit to the diversity of ideas. That's one thing. The other thing is the reluctance towards
(17:26) AI, which is why, when I first ran the workshops, they were very much positioned as a "let's have a bit of fun" sort of deal.
That's a good way to position it, yeah.
And I remember there were a couple of people who were very resistant
(17:48) to AI; this was their first interaction with it. With them, you just have to be patient: "okay, you can do what you want; you don't need to go into it too much, and you can just observe how other people are doing it." Something happens in the observation as well:
(18:08) there's a light-bulb moment, an aha moment, where they go, "well, it may not be good for me, but it may be good for that person," that sort of thing.
That's good, that's amazing. I think AI is also developing, and more people should at
(18:31) least be open to it. I wouldn't say most people, because I don't have the data to back it up, but there may be people who resist it because of the notion that it may take their jobs; that's a usual thing. What would you say to those people
(18:53) who are reluctant about AI today? Because I'm pretty sure you have some opinion about it.
Yeah. It's about making really clear that it's a human-AI interaction that is taking place. It's not you prompting the AI, it prompting you back, and that's the
(19:21) job done, because if that were the job, then that would be very, very scary.
Yeah, that's true.
But if it's positioned as a human-AI interaction, then the output is entirely in your control. You control the output; you are the boss. It's almost like, and this is something Craig mentions as his
(19:47) view of looking at AI, it's almost like having an intern. If you treat AI like an intern, then you are in full control, you are the boss, and you get to decide what comes out of it. And once you've seen your output and the AI's output separately, neither of
(20:11) those outputs is particularly good on its own, but combined, the output is much better. The human is really vital to that equation. The biggest mistake people are making at the moment is thinking, "hey, it's AI, let's replace entire teams with AI."
(20:36) Collaboration is the key, especially at this phase of AI.
Awesome. Okay, now going back to our original topic, which is problem solving. You have all these problems that you need to solve, and let's say you have prioritized everything, because we want to stay with the
(20:58) topic: how would you minimize potential problems before, during, and after? I think that could be a very long conversation, but do you have a certain checklist to minimize problems that would arise before, during, and after A/B testing?
What I usually try to do is a premortem, which is
(21:22) basically trying to predict the outcome of an experiment. That's actually pretty easy, because at a very basic level a test could be a win, it could be inconclusive, or it could be a loss; or there could be data implementation issues, SRM, or other data integrity issues. Based on those outcomes, you then work back to ask what
(21:49) could have caused them. If you work back and ask, "okay, what could have caused these outcomes?", well, maybe it's because this is a new area of the site where we've never run tests before, and we've never set up a trigger here; this is a manual trigger. It forces you to
(22:08) explore that specific space, that specific area. Once you've done that, you can ask: what could I do to mitigate the risks? What metrics could I put in place to make sure we learn from the experiment we're doing? Are there pilot tests we
(22:31) could run, very simplified versions of those tests, or an A/A test just to test the trigger? It gets you thinking about all of those sorts of things. The premortem helps you predict maybe not 100% of the issues that could go wrong, but even if it predicts,
(22:55) or lets you catch, 70-80% of them, then you know. Just be aware that there are confounders in everything we're doing that could impact the outcome of the test, and plan ahead for what you would do in those instances. For me, that's a very powerful framework.
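To make the premortem concrete, here is a small Python sketch: enumerate the possible outcomes, work back to candidate causes and mitigations, and add the SRM check Iqbal mentions as a standard chi-square test for sample-ratio mismatch (using SciPy). The causes, mitigations, and traffic numbers are invented for illustration.

```python
from scipy.stats import chisquare

# Premortem: for each possible outcome, list candidate causes, then map
# each cause to a mitigation you put in place up front. All entries invented.
causes_by_outcome = {
    "win": ["novelty effect inflating the lift"],
    "loss": ["new area of the site with an untested manual trigger"],
    "inconclusive": ["underpowered test; metric too far downstream"],
    "data integrity / SRM": ["trigger fires unevenly across variants"],
}
mitigations = {
    "new area of the site with an untested manual trigger": "run an A/A test first",
    "underpowered test; metric too far downstream": "add a closer proxy metric",
    "trigger fires unevenly across variants": "monitor SRM from day one",
}

def srm_check(visitors_a: int, visitors_b: int, alpha: float = 0.001) -> bool:
    """Chi-square test for sample-ratio mismatch on an intended 50/50 split.
    Returns True if the observed split is suspicious (likely SRM)."""
    expected = (visitors_a + visitors_b) / 2
    _, p_value = chisquare([visitors_a, visitors_b], f_exp=[expected, expected])
    return p_value < alpha

# Invented traffic counts for a 50/50 test that drifted.
print(srm_check(10_000, 10_500))  # True -> investigate before trusting results
```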
(23:18) Oh, interesting. When you were talking about frameworks, I was just thinking: was there ever a time when you ran a test and it solved a different type of problem instead of the problem you were targeting? Like, "oh, that's not supposed to be the outcome, but that's
(23:40) interesting." Was there any time that happened?
Yes, yes, all the time. It's always a fact-finding mission when you send out an experiment, and running experiments is not as sequential as it may appear at first glance. Sometimes you'll
(24:06) get an outcome and go, "well, this one has done nothing for this problem, but it seems to be related to another problem." The independent variable we're changing is not really associated with this problem; it could be associated with another problem,
(24:24) in which case you just align those problems.
That's true, that's true. Okay, well, do you have a conclusion, or any last thoughts you want to share with our viewers about identifying problems and solution bias?
Yes. It's the fact that, I think, a lot of the time people miss out
(24:51) that step, because it takes so much time and resource to do it, or there's a perception that it takes time and resource to do it. But find a way to do it effectively and efficiently, and AI happens to be that tool. I know I'm banging on about AI a lot, but it solves that
(25:10) problem as well: it really minimizes the resource issues. Skipping that step is a mistake. If you just race to ideation, ideation, ideation, you miss the wood for the trees, and you end
(25:32) up on the wrong track, doing the wrong things, and in the end it's really not going to align to any sort of business goal. So that exploration, that connective tissue between the business problem, the user problem, the problem statement, and the solution, is something
(25:54) you really need to make sure you factor into your time.
Yeah. What I remember is that it's kind of like shooting without a target: like firing a bow with nothing to aim at.
That's a really good analogy, yeah.
(26:17) It's like you keep on doing things and nothing gets hit, because no problem was identified in the first place.
Yeah, how do you even know you're progressing? Which direction are you moving in after a test?
Exactly, exactly. Awesome. So I
(26:34) think we had a really great chat. Thank you so much for sharing that AI information and knowledge, Iqbal. We're so thrilled and happy to have you here as a guest at Experiment Nation. And we're just going to say goodbye to our viewers now, so thanks, everyone. Thanks!
If you liked this post, sign up for Experiment Nation's newsletter to receive more great interviews like this, memes, editorials, and conference sessions in your inbox: https://bit.ly/3HOKCTK