Podcasts with Experimenters
A podcast with Amazon’s Munir Al-Dajani about Experimentation
The following is an auto-generated transcript of the podcast by https://otter.ai with some manual editing.
Rommil: Munir, Munir, Munir. Long time no hear. How are you?
Munir: I’m good man. Yeah, it’s been a while. I think the last time we saw each other was in February […] when the world was a very different place.
Oh geez, feels like, ages and ages ago at this point. That said, you know, welcome to our podcast. This is the first episode of Experiment Nation. So really happy you could join us!
Yeah. Very excited to be here!
Let’s start with you. Tell us a little bit about yourself.
Yeah, for sure. Currently, I’m a software engineer at Amazon, specifically on Amazon Go. Before that I worked as a machine learning engineer and as a data engineer at a bunch of startups in Toronto, most notably Ritual. […]
Yeah, I’ve heard of them. lol
I feel like you were [there for] a little bit, a little stint.
Yeah. A very short period of time.
Great. It was a great time. Yeah, so my background is mostly in machine learning and data. And more recently, I’ve done a little bit more hardcore software work.
Very cool. So today, you know, I brought you on, because I wanted to talk about something. I want to talk about personalization. And let’s kick it off with how do you define personalization?
Yeah, great question. The way that I see personalization is it’s a way to tailor an experience for users on a platform in order to maximize their engagement with the platform. And it’s usually associated with some business objectives. So a company will decide that we need to, let’s say it’s YouTube for example, we want more people to be clicking on more videos, or we want people to stay on videos for longer. So we’re going to make a concerted effort at recommending videos to them that we know [will make them] stay on for longer. So they tailor the experience – the home feed, the videos that they see on the home feed, the videos that they see in the related videos menu […] all meant to be catered to that particular user and their interests [based on] their history on the platform, so that they can engage with the platform more meaningfully. And usually, that’s associated with bettering some business metric for the platform.
In terms of recommendation, at least in my mind, I imagine it takes a lot of data.
You need data to varying degrees; it depends on how you approach the personalization. [Let’s] take an e-commerce platform, something like the Shoppers [Drugmart] website, or whatever it is, […] Instacart. You could predict, kind of, what a person is going to buy, depending on what they have in their cart, without a ton of data. Like you could just look at what people tend to co-purchase with one another, what items tend to go well with each other. [You] can have rules that look at that, and then decide what the person is going to be buying next. You know, for example, we’re coming up on Thanksgiving here in the States, and just based off of that knowledge, you can kind of infer what a lot of people are going to be looking for when they’re buying groceries. So there are definitely very data-intensive methods for personalization, and the efficacy of using them really depends on the platform, it depends on what the content is that you’re surfacing to the user and […] how you’re looking for the user to engage with it. But I’d say that, particularly on e-commerce platforms, you can get away with doing a lot using just rules.
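A co-purchase rule like the one described here can be sketched in a few lines. The order history and item names below are invented for illustration: rank candidate items by how often past orders paired them with something already in the cart.

```python
from collections import Counter

# Hypothetical order history: each past order is a set of co-purchased items.
orders = [
    {"turkey", "cranberry sauce", "stuffing"},
    {"turkey", "stuffing", "gravy"},
    {"bread", "butter"},
    {"turkey", "cranberry sauce"},
]

def recommend(cart, orders, k=2):
    """Rank items by how often they were co-purchased with anything in the cart."""
    counts = Counter()
    for order in orders:
        if order & cart:                 # this order shares an item with the cart
            counts.update(order - cart)  # count the items the cart is still missing
    return [item for item, _ in counts.most_common(k)]
```

With a turkey in the cart, the rule surfaces the items most often bought alongside it; no model training is involved.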
Yeah, I wanted to dig into that. [There] is always this conversation between using machine learning or AI versus rules-based personalization. How do you choose which one to use? Are there benefits to using one over the other? How do you look at it?
That’s definitely a great question and it’s one that every organization has to deal with, every organization that deals with users […] has to deal with at a certain point in time. I’d say that if you have to ask the question, you’re probably going to be good with a rules based system. You know, […] YouTube doesn’t need to ask that question because YouTube has so much data that it’s a non-sequitur. [It’s] just like we need to use machine learning. Spotify has so much data. And […] it’s not just about the volume of data, it’s about the complexity of the data. So let’s take Spotify for example. The data that they’re trying to recommend to you are different songs or different artists or different albums and you can choose to represent that data in a way that’s complicated or a way that’s simple.
So if we take a song, for example, one simple way that you can represent the song is by the genre, by the duration, by the average tempo. You can take this metadata and use it as a way to represent a particular song. And then based on that […], what a user tends to like in terms of genre and what a user tends to like in terms of tempo, based on that, being able to decide what they’re gonna like next. And with that simple representation of a song, you can build a simpler recommendation or personalization system. But if you want to get a level deeper and try to tap into more latent reasons why a particular user or listener might like certain songs then you might choose to represent the song in a more complicated way.
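The simple metadata representation described here can be sketched directly; the songs and weights below are made up, with candidates scored by genre match and tempo closeness to what the user already likes:

```python
# Hypothetical catalogue: each song represented only by simple metadata.
songs = {
    "song_a": {"genre": "rock", "tempo": 120},
    "song_b": {"genre": "rock", "tempo": 125},
    "song_c": {"genre": "jazz", "tempo": 90},
}

def score(song, liked):
    """Score a candidate by how well its metadata matches the user's liked songs."""
    s = 0.0
    for other in liked:
        if song["genre"] == other["genre"]:
            s += 1.0                                            # same genre: strong signal
        s += 1.0 / (1.0 + abs(song["tempo"] - other["tempo"]))  # closer tempo: higher score
    return s

liked = [songs["song_a"]]
candidates = [k for k in songs if songs[k] not in liked]
best = max(candidates, key=lambda k: score(songs[k], liked))
```

A user who likes one rock song at 120 bpm gets recommended the other rock song at a similar tempo, purely from metadata.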
[Popular] ways of doing that are […] audio embeddings, and music embeddings. And embeddings are […] like a very high dimensional vector representation of some entity that’s meant to encapsulate a lot of the minuscule parts that compose it, that create it, and distinguish it from other entities in the same realm. And if you choose to go with that very abstract, and very, quote unquote, detailed representation of a song, then you might need to opt for a more complicated system that can leverage that representation to its maximum potential in order to use it for personalization. So it has to do with the volume of data. It has to do with the complexity of the data, and it also has to do with the business objective. Because depending on the business objective, you might be able to get away with very, very simple representations and very simple rules. Like I said, going back to the e-commerce platform, [if] your business objective is to maximize GMV, or maximize the number of purchases that a user makes, or maximize the total cart value, if that’s the objective, then […] there are rules that are well studied and learned. [Back] in the 70s and 80s, there was a lot of research done by MBAs […] where you would go into a grocery store, and […] optimize the grocery store layout. [We] knew that if you put gum and chocolate near the checkout, people were going to naturally pick them up, even though they didn’t come there explicitly for them. So rules like that can really, really get you pretty far. […] But if you go into a platform like YouTube or Spotify, then it becomes a trickier question. And it depends on what you’re looking to optimize.
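The embedding idea can be illustrated with toy vectors. Real audio embeddings have hundreds of learned dimensions, but recommending by embedding similarity usually reduces to a nearest-neighbour search, for example by cosine similarity; the four-dimensional vectors below are invented:

```python
import math

# Hypothetical 4-dimensional audio embeddings (real ones are learned and much larger).
embeddings = {
    "song_a": [0.9, 0.1, 0.0, 0.2],
    "song_b": [0.8, 0.2, 0.1, 0.3],
    "song_c": [0.0, 0.9, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity: dot product divided by the product of vector lengths."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def nearest(query, embeddings):
    """Return the song whose embedding points in the most similar direction."""
    return max((k for k in embeddings if k != query),
               key=lambda k: cosine(embeddings[query], embeddings[k]))
```

Here `nearest("song_a", embeddings)` finds the song whose vector is closest in direction, which is the latent-similarity recommendation described above.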
Would it be fair to say that if you can describe it in an understandable sentence, then it’s a candidate for rules?
I definitely think that would be an adequate qualifier for deciding whether or not to use rules. Because rules will also allow you to explain your recommendations, […] which has huge value to a business. So […] that actually brings up another point, going back to your question: when should we use rules, and when should we use machine learning to really abstract things? I think that if you have a requirement to be able to explain what your system is doing. [There’s] a lot of research being done in explainability in machine learning, but it’s still not quite there yet. [Rules] will always be very explainable: you’ll always be able to say, we surfaced this product or this content to this user for this reason, because it passed this predicate or whatever it is. So if you have a clear requirement […] to explain what your system is doing, then rules […] should be your number one consideration. But yeah, I think […] your sentence was definitely right. If you can explain it in a sentence, or if you have a hypothesis that’s like, I think users tend to like *blank*, then you should go with a rules-based system. If you want a more fuzzy approach, which […] usually comes with more ambiguous data, going back to YouTube versus Spotify, then you might need more complicated systems.
…if you have a hypothesis, that’s like, I think users tend to like *blank*, then you should go with a rules based system…
From what I’m understanding is that you could take either a rules-based or machine learning approach, but either way you’re trying to maximize utility for someone. Now, because you’re using all this data, especially when you get into collecting a lot of big data, ethics starts to play a role. There was a recent article from Forbes, I’ll show you, I think I shared the link with you. You know, and it says that 17% of consumers feel that personalization is ethical. You know, that means a good chunk of them feel that it’s evil. So what are your thoughts about using personalization? Is it evil, per se, to essentially dig into past behavior to recommend things?
I mean, let’s start by saying that personalization, in and of itself, I don’t think is inherently evil or not. It depends on how you use it. So let’s take a case study, for example. Let’s look at Netflix. Now, Netflix is one of the largest organizations that employs massive personalization systems and they use them […] to recommend things to you, right? So your home feed, what you see is very catered towards you. [They] use a lot of that data, […] big data […] about you to figure out what should go there, because we know that statistically, whatever gets surfaced on the homepage, directly in front of you, has a higher likelihood of [being clicked] through and […] “converting” on it than if you had to search for it […] buried somewhere deep in the platform.
So Netflix, I think in 2017 or 2016, published a paper where they estimated that their personalization systems saved them roughly a billion dollars a year, at that time. Which is a huge chunk of change. [You] might think that, well, [if it] saves them a billion dollars a year, is it because users are more likely to be retained and not unsubscribe because they’re getting personalized content? I mean, sure, probably a percentage of the billion is attributed to that. But the vast majority of it comes from how they decide what content stays on Netflix and what content to get on Netflix. […] Netflix’s largest operational cost is the licensing fees that they have to pay to studios to get content on their platform. And using their personalization systems, [they can say,] these are the shows that we think they might like. So if they can do that for every single user across the platform, they can determine what content most users wouldn’t really care about. In which case, they wouldn’t go and negotiate with the studio and […] pay a licensing fee and get that content on the platform, because they know through their data that their users probably won’t really care that much for it.
So is that an unethical use of personalization? I don’t necessarily think so. It’s just leveraging the data to optimize costs. […] your users are no better or worse for it. You might argue that, well, in that case, maybe low budget indie developments, or low budget indie films […] shows don’t have an opportunity, or don’t have as much of an opportunity, of appearing on Netflix and that’s why it’s unethical. And sure, you could argue that. But then that’s where you have the rise of competing platforms that cater specifically to that niche. So I don’t think that, in that particular use case, it’s unethical to use personalization.
[…] just to jump in there. […] So when you think about Netflix, we can argue, to get on that platform, you have to get vetted and approved. There’s a quality check of some sort with Netflix. Now on YouTube, it’s a free-for-all. I literally uploaded a video a couple of weeks ago of just nonsense. There was no person to check if it was good. […] if I were to upload misinformation, just in terms of today’s environment, let’s say I uploaded something that wasn’t necessarily 100% true, the algorithms could just spread that around if it seemed like a lot of people were interested in watching it, because obviously, their business is ad dollars. So […], I’m wondering what your thoughts are around the ethics of their model, particularly.
Yeah, so actually, I’m glad that you brought that up because that was literally going to be my counter example. [On] one side of the fence is […] Netflix’s personalization system, where the content that is […] there is curated, and so on, versus YouTube, where it’s kind of a free-for-all […]. So, in those cases, we’re starting with the baseline of saying that presenting misinformation, or aiding in the spread of misinformation, is unethical. […] I know that more recently, there’s been a lot more research into fairness in recommendation engines and fairness in recommendation algorithms, but as they have been for most of the time that they’ve existed in production, they don’t have that notion of misinformation. The only notion that they have is [to] maximize click-through rate, maximize retention, maximize the amount of time that the user is on the video. So with those objectives left completely unconstrained, without governance around the quality of the content, then there’s no […] notion of what we shouldn’t surface. […]
In which case you could have […] videos that spread hatred, or videos that spread misinformation, or all sorts of content. It doesn’t have to be just videos, it could be Facebook posts. […] we know from human psychological studies that people are more likely to engage with content that either enforces their echo chamber or infuriates them. So when you have news outlets, or YouTube channels, or whatever, that get that, suddenly […] they decide, okay, well, I’m gonna play directly into that. So all of my headlines are going to be sensationalist, all of my content is going to be on the fringes, everything’s going to be super, super polarizing, it’s going to be on one end of the spectrum and nothing is going to be in the middle ground, […] little of the content in the middle ground is going to have a chance to be surfaced because people are engaging mostly with the ends of the spectrum. And the personalization system is trying to optimize for people’s engagement. So the content on the ends of the spectrum gets an unfair advantage. Does that make sense?
…we know from human psychological studies, that people will be more likely to engage with content that either enforces their echo chamber or infuriates them…
I think it does make sense. But then, how do you safeguard against that, essentially?
Well, like I said, there’s certain research that’s […] been coming in around fairness in recommendation systems. But ultimately, what it comes down [to] is your […] recommendation systems, and machine learning in general, will only ever work as well as the data that gets fed into them. So it comes down to data governance. It comes down to, how do you assess the quality of data? And these are really hard questions to answer, right? Like […] when you look at Facebook and you see that they’re talking about assessing the quality of an article or assessing the quality of news or being able to identify whether something is disinformation or not, in a lot of cases, it’s very hard for a human to decide that, much less an algorithm. So it comes down to data governance, it comes down to, how do you analyze the data that comes into your system and into these algorithms? And how do you regulate it, or manage it. Yeah.
So essentially, I don’t even know how to go about setting up this type of governance. Are there examples out there of companies doing it?
Well… none that I know of.
That’s very encouraging. lol
Yeah, it’s definitely a very hard problem to solve. I think that the more constrained the type of data that you’re surfacing […] the more control that you can have. But the more free-for-all it is, […] you know, something like Facebook or something like YouTube, where anybody can upload anything, and […] there are very few restrictions, it just becomes very, very hard to analyze that. Because you also need objective statements of knowing what’s right and what’s wrong in certain cases.
[…] So you need us to figure that out before we can instill it in algorithms, you know? Versus platforms like Spotify, or Netflix, going back to those cases, […] where the platform has complete control over the content that goes on there. So yeah.
Okay. Well, changing gears slightly. So, we’ve dropped some terms during our conversation, such as machine learning, artificial intelligence… I always find this conversation, or this debate, kind of interesting. In your view, what is the difference between machine learning and artificial intelligence? I get different answers from different people.
Yeah, I’m sure I’m gonna be giving you yet another answer to add to that roster. But let’s […] try to delve into that. Let’s try to look at humans. Let’s start […] with our first principles, like our number one example of intelligence, which is us. Because I think before we [can] even begin a conversation of what is artificial intelligence, we should have some semblance of an understanding of what is intelligence, before quantifying it. So this can be […] the most high-level understanding of what intelligence is. It’s that you have these quote unquote rational agents, or they might be irrational agents. They’re just agents in the world that have a way of perceiving information from the world around them, and they’re goal driven, whether the goals are very basic survival goals, like finding shelter, finding food, finding safety, versus very high-level goals, like I want to take this career path, and I want to do this, and I want to do that, I want to start my own company. So they’re goal driven in some way or another. And they have ways of perceiving information about the world. And then they enact actions upon the world in order to achieve their goals. And this constant feedback loop of commit action, get response, see how it aligns with the goals, commit new action, and so on, and so forth: [y]ou can describe that as intelligence.
Now, if that’s your description of intelligence, then artificial intelligence is having a machine do something similar. Now, the first question that can come out of that is, well, what are the goals of the machine? It’s whatever we would program those goals to be. […] If we can make the world very simple, […] say that the world is comprised of users and videos, and the actions in the world are users watching videos. […] that’s a simple, simple, simple world. In that case, an artificial intelligence could be an algorithm that’s optimized [so that] its goal is to maximize how many users click on videos, or maximize the number of users that click on a particular video.
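That toy world of users, videos, and clicks can be written down directly. This sketch (with invented click probabilities the agent cannot see) shows the perceive-act-feedback loop described above: the agent’s goal is clicks, and a simple epsilon-greedy loop gradually concentrates on the better video:

```python
import random

random.seed(0)

# Toy world: two videos with hidden click probabilities; the agent's goal is clicks.
click_prob = {"video_a": 0.2, "video_b": 0.7}  # hidden from the agent

clicks = {"video_a": 0, "video_b": 0}
shows = {"video_a": 0, "video_b": 0}

for step in range(1000):
    # Perceive: estimate each video's click rate from the feedback so far
    # (optimistic 1.0 for videos never shown, so everything gets tried).
    est = {v: clicks[v] / shows[v] if shows[v] else 1.0 for v in clicks}
    # Act: mostly exploit the best estimate, sometimes explore at random.
    video = random.choice(list(clicks)) if random.random() < 0.1 else max(est, key=est.get)
    # Feedback from the world: did the user click?
    shows[video] += 1
    if random.random() < click_prob[video]:
        clicks[video] += 1
```

After the loop, the agent has shown the higher-click-rate video far more often, which is the commit-action, get-response, adjust-toward-goal cycle in miniature.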
Right, but isn’t it? Isn’t it machine learning?
I’m gonna get to that in a second.
We haven’t talked about machine learning at all; this is all just the idea of intelligence.
Got it. Yeah.
[…] Fundamentally, machine learning is all about just understanding patterns. That’s it. […] Let’s take it at the most basic level: every machine learning algorithm that you will encounter […] falls into one of three families: […] supervised, unsupervised, and reinforcement. But across the board, all they do is understand patterns.
So in supervised machine learning, you have a set of data and a set of labels or categories for that data. And you’re trying to, quote unquote, learn the mapping from a particular kind of data to its label. So if you have images of cats and dogs, and you know which images are cats [and] which images are dogs, you’re trying to build an algorithm that understands, when an image is fed into it, whether it’s a cat or whether it’s a dog. That’s […] supervised.
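A minimal supervised sketch, with made-up two-dimensional "image features" standing in for real images: learn from labelled points (here, just by averaging each class), then predict the label of a new point:

```python
# Tiny supervised example: labelled points stand in for image features.
data = [
    ((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
    ((4.0, 4.2), "dog"), ((3.8, 4.0), "dog"),
]

def fit(data):
    """'Learning' here is just averaging each label's feature vectors into a centroid."""
    sums, counts = {}, {}
    for (x, y), label in data:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {l: (sx / counts[l], sy / counts[l]) for l, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Predict the label whose centroid is closest to the point."""
    return min(centroids, key=lambda l: (centroids[l][0] - point[0]) ** 2
                                        + (centroids[l][1] - point[1]) ** 2)

centroids = fit(data)
```

A new point near the "cat" cluster gets labelled cat: the mapping from features to label was learned from the labelled examples.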
[In] unsupervised learning, you have different groups of input data, and you don’t know what their labels are. So you might have images of cats and dogs, but you don’t know that they’re images of cats and dogs, and the objective is to fundamentally find things that separate these images, being like, […] this image falls into this group because it has characteristics and properties that are most similar to this group. And this image fundamentally falls in a different group, because it does not adhere to the characteristics of that group, but it adheres to the characteristics of this group. You can kind of analogize that, you know, as humans. You might not be someone who knows art, for example, like fine art; you might not know the difference between the Impressionist period, or the Romantic period, or the Baroque period. But if someone puts a set of Baroque paintings in front of you, from Rembrandt and other Baroque painters, and a set of Impressionist paintings in front of you from Van Gogh and other Impressionist painters, you might not be able to say this is Rembrandt and that’s Van Gogh. But you can tell that these are fundamentally different kinds of artistic styles. And so that’s the idea behind unsupervised learning: understanding the latent patterns that distinguish different groups of data.
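The same kind of points with the labels removed illustrate the unsupervised case: a few iterations of k-means recover the two groups purely from the structure of the data, never knowing what the groups mean:

```python
# Unsupervised sketch: the same invented points, but with no labels attached.
points = [(1.0, 1.0), (1.2, 0.9), (4.0, 4.2), (3.8, 4.0)]

def kmeans(points, k=2, iters=10):
    """A few iterations of k-means: assign to nearest centre, recompute centres."""
    centers = list(points[:k])                         # naive initialisation
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                               # assign each point to nearest centre
            i = min(range(k), key=lambda j: (p[0] - centers[j][0]) ** 2
                                            + (p[1] - centers[j][1]) ** 2)
            groups[i].append(p)
        centers = [(sum(p[0] for p in g) / len(g),     # recompute centres as group means
                    sum(p[1] for p in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

groups = kmeans(points)
```

The algorithm separates the two clusters (Baroque versus Impressionist, so to speak) without ever being told the labels.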
And with reinforcement learning, […] it’s the most similar to the definition of artificial intelligence that I gave, which is you have [an] agent in a world, and the agent can be in different states in that world. […] the agent can commit different actions in the world that modify or change its state, [a]nd there’s some goal in that world. So let’s say that your world is the game of chess; your state is the state of your board. And let’s say that you’re playing the white side of the board. The state is the position of all the white pieces on the chessboard. And every action is every potential thing that you can do, like move the knight, move the rook, move the queen, whatever. And your goal, ultimately, after however many moves in this game, is to win. And so in reinforcement learning, it’s learning that Cartesian product, like the combination of state-action pairs, and understanding what yields the most long-term reward, which patterns of state-action pairs yield the most long-term reward, where long-term reward is measured, ultimately, by whether you win or whether you lose.
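The state-action idea can be sketched with tabular Q-learning on a tiny chain world; chess is the same idea with an astronomically larger state-action space. The world, rewards, and hyperparameters below are invented for illustration:

```python
import random

random.seed(1)

# Tiny chain world: states 0..3, actions left/right, reward only for reaching state 3.
N, GOAL = 4, 3
Q = {(s, a): 0.0 for s in range(N) for a in ("left", "right")}

def step(s, a):
    """The world's response: new state, plus reward 1.0 only at the goal."""
    s2 = max(0, s - 1) if a == "left" else min(N - 1, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: usually take the best-known action, sometimes explore.
        a = random.choice(("left", "right")) if random.random() < 0.2 else \
            max(("left", "right"), key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        # Move the value of this state-action pair toward reward + discounted future value.
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, "left")], Q[(s2, "right")]) - Q[(s, a)])
        s = s2

policy = {s: max(("left", "right"), key=lambda act: Q[(s, act)]) for s in range(N)}
```

After training, the learned policy is "right" in every non-goal state: the table of state-action values has encoded which patterns of actions yield the most long-term reward.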
But again, […] these are the three high-level families of machine learning algorithms. And the commonality across the board is that they’re pattern recognition. So let’s go back to the first question you asked: what’s the difference between machine learning and artificial intelligence? Depending on how you choose to define each, they can either be very similar or they can be very different. The way I understand it is machine learning is pattern recognition. And pattern recognition is a necessary subset of intelligence. You need to be able to understand patterns in order to be able to, […] have intelligence. But […] you also need a lot more things in order to be able to […] achieve […] artificial general intelligence on top of patterns. There are people who disagree with me. There are different schools of thought within the research community. But another way that you can kind of think about it is, again, let’s go back to first principles: we are the best example we have of intelligence. Going back to what I described as intelligence, which is us perceiving information about the world around us, and then committing actions in order to further our goals. Machine learning has historically, or more recently, done very, very well in tasks related to understanding the information around us. So image recognition, speech recognition: these are sensory, ultimately; these are ways of receiving information. [Y]ou might choose to define them as intelligent components in and of themselves. But they alone do not comprise intelligence in its totality. And, yeah, so machine learning has done very well in terms of allowing us […] to find an automated way of perceiving information about the world around us. Or at least proxying the way that we do it as humans. But it hasn’t gotten much further beyond that. Yeah.
So okay, you’re saying that machine learning recognizes things, and artificial intelligence is a way to achieve, I guess, in layman’s terms, to achieve the thing. And to achieve that thing, you need to be able to recognize things through machine learning. Now, let’s say in Spotify, you can use machine learning to identify a pattern between a bunch of songs, whatever. But then my artificial intelligence hat goes on and says, well, recommend that to Joe. And isn’t that… it almost feels like machine learning does a lot of the hard stuff and artificial intelligence could be very simple.
You can look at it that way. But let me…
But I would be wrong. LOL
Actually, you’re completely right. Because in the world of Spotify, let’s go back again […] to the first description of intelligence, which is you have a world, you have actions, you have states, you have goals, and your […] objective is to commit actions that maximize your goals. So in the world of Spotify, what you’ve described is completely right, because the heavy lifting is done by understanding the information. Once you understand the information, the goal can be achieved very easily. But that’s not true across all worlds. Let’s take the world of self-driving cars, for example.
Okay, that’s true.
With self-driving cars, in that world you need to perceive so many things: through LIDAR sensors, through computer vision, through cameras. And these are all different ways of perceiving the information around the car. But then afterwards, that all needs to be fed into a general intelligence system that decides it’s safe for you to switch lanes now, or it’s safe for you to take a right turn.
And that general… that AI is much more complicated than Spotify’s “recommend this to Joe,” then.
Got it. Got it. So yeah, all this sounds very complicated. So clearly, they’ll hire a lot of folks as smart as you to sort out a lot of these algorithms and work on all these models. But is it worth it? You know, let’s go all the way back to personalization.
And leveraging ML/AI, what have you. Is it worth it?
That’s definitely a great question.
And okay, we’re done. LOL
Yeah, outstanding! Um, no, […] there’s no universal answer to that. It comes back to the beginning, when we were talking about when you should use AI systems for personalization versus rules-based systems. Is it worth it? I would point you to Google’s ad revenue per year. So in that case, 100%, it’s worth it, because their ad tech system, you better believe, is 100% powered by these kinds of personalization systems, because fundamentally, that’s […] their revenue model, right? [T]hey’re an ad surfacing platform, that’s their thing. So if they can guarantee, or they can offer to people who are trying to advertise something, that we’re going to present it to the users who are most likely to engage with it, then that makes them a very, very powerful ad platform. And they make a lot of money that way. And the way in which they’re able to […] surface this to the users who are most likely to engage with it is because they have machine learning systems that allow them to understand their users, and what they’re likely to engage with, across multiple domains. Because Google has YouTube, which is video content; they have Google Podcasts, which is podcast content; they have Google Music, which is music content. So they can have a very, very sophisticated, holistic understanding of who their user is and what they like, in the same way that Facebook [can]. That’s another great example. […]
I like how you completely ignore, you know, maybe Amazon’s doing it, too.
I mean, Amazon certainly is doing it, but Amazon, […], fundamentally is an e-commerce platform. So […] strip away all the personalization, and it’s still someplace that you can go to buy stuff. But Facebook, as far as a business model is concerned, without ads, they don’t have very much going on in terms of revenue, right? And I mean, you don’t need to take my word for it, you can just pull up their most recent quarterly statement. […] strip away all the personalization out of Amazon, and you still have an e-commerce platform where sellers can go to sell their things and buyers can go to buy their things. But strip that out of Facebook, […] you have no way to make money.
This is a podcast on experimentation, so you know, I have to ask some experimentation questions. So you have a lot of these models: ML, AI, personalization. You know, it solves complex problems, simple problems, etc. How do you experiment on this? Like, when you create a model, how do you write an experiment to understand if it’s working? Is it possible to run experiments? Or is it just like you write it up and push it to production, and you’re done? Obviously, I’m hoping that’s not the right answer. Because then why are we talking? But yeah, I’d like to hear your perspective on that.
[…] like any other product change, or feature change, or whatever, it does require experimentation. The type of experimentation that you would […] employ depends on the granularity with which it changes the end experience for the user. […] what I mean by that is, let’s suppose that you have a platform, and you have a homepage on that platform, and that homepage surfaces content that users are most likely to engage with. If you are going from a rules-based system to a collaborative filtering recommendation system, you’re changing the way that the content gets surfaced on that homepage. It used to be a rules-based system; now it’s a collaborative filtering algorithm. Then […] the type of experimentation that you would do could be as simple as A/B/n testing. Because at that point, you have such a wild change in how fundamentally the content gets surfaced that you don’t need a very sensitive statistical model to be able to evaluate the differences between your control group and your treatment group. But if you’re changing it from one collaborative filtering model to another, then you might need […] more Bayesian methods, or something like that, in order to […] quantify the more minute differences between the two ways in which the content gets surfaced. I should also have prefaced all this with saying that the generally agreed upon framework for evaluating recommendation systems is […] called the counterfactual evaluation framework. And if you Google that, you can find the lecture series from Cornell where there are case studies from Microsoft, Google and different big companies on how they employ counterfactual evaluation. But ultimately, the question that it aims to quantify is, […] what would the user have done had we not surfaced this content? Or had we surfaced it [one] way in the control or in a different way in the [variant]? Everything kind of universally falls underneath that framework.
And then the specific test that you employ depends on the delta, the magnitude of the delta, between the approaches that you’re taking to recommend things.
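As a sketch of the simple end of that spectrum, an A/B read-out on click-through rates can be done with a two-proportion z-test; the click and impression counts below are invented:

```python
import math

# A/B read-out sketch: control (rules-based) vs. treatment (new model) click-through.
def z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-proportion z-statistic for the difference in click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)            # pooled rate under the null
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error of the difference
    return (p_b - p_a) / se

z = z_test(clicks_a=200, n_a=2000, clicks_b=260, n_b=2000)
significant = abs(z) > 1.96                            # roughly the 5% two-sided threshold
```

A 10% versus 13% click-through rate on 2,000 users each clears the threshold; for the subtler model-to-model deltas described above, this coarse test would not be sensitive enough.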
In a scenario where, let’s say, your population interacts with each other, how can you A/B test, or A/B/n, or what have you, in that situation? Where the users who see the different experiences can interact? Is that a concern in this world? Because if you are sitting beside someone else, and they see a different thing, and they talk to each other, doesn’t that impact how you’re gonna evaluate your model?
To a certain degree, but […] if their interaction with one another ultimately changes the polarity of the metric that you're trying to improve through this system, then yes, you would take it into consideration. But you would only take it into consideration insofar as it's represented in the data. So let me explain that. Let's start from the beginning: […] the reason why you build any of these systems is because you have some business metric that you want to improve […]. Now, let's suppose that I have two users, A and B. And let's say that the platform is LinkedIn, because LinkedIn is a very good example of this. On LinkedIn, depending on when you click the homepage, you get a different set of content ranked on the homepage; the ranking model gives you different things each time. And that's very deliberate. It's because if you see the same thing every time that you go on the homepage, you're less likely to visit the homepage very often in a given day. But if you see something new every time, then you're more likely to be there. Now, let's say that A and B are sitting next to each other, A sees different content than B does, and A clues B in on that, and suddenly that causes B to change their behavior. That behavior now gets fed into the system, so when the model gets retrained […] the next day, it'll account for B's change. The way in which that gets accounted for is the way that it manifests itself in the data the next time that we train.
So the more this happens, the more […] it learns that there's going to be a percentage or group of folks that interact, and it just builds that into its own model.
Exactly. And that's why […] with a lot of these personalization and recommendation systems, some of the biggest pieces of machine learning infrastructure that are necessary to have are […] things that measure data drift, model drift, and concept drift. What that describes is: how much is […] the data today different from the data yesterday? Because if you're not constantly adapting the model to the changes in user behavior that are happening, then your business metric might not continue to move in the trajectory that you want it to. So you need to always be measuring: how different is the data today than it was yesterday? How different is my model responding today than it responded yesterday? If the model trained today had been responding yesterday the way it does today, how would that have changed my evaluation metrics yesterday? That tells you whether historically backfilling on this model would increase performance. You see what I mean?
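One common way to put a number on "how different is the data today from yesterday" is the Population Stability Index (PSI) over a feature's binned distribution. This is a minimal sketch, not the specific tooling discussed above; the bucket counts are invented, and the oft-quoted 0.1/0.25 alert thresholds are an industry rule of thumb rather than a standard:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    expected / actual: lists of counts per bin (e.g. yesterday vs. today).
    Higher PSI means more drift; ~0.1 and ~0.25 are common alert thresholds.
    """
    e_total, a_total = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, eps)   # clamp to avoid log(0) on empty bins
        a_pct = max(a / a_total, eps)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

yesterday = [120, 300, 350, 180, 50]   # counts per engagement bucket
today     = [100, 280, 330, 210, 80]
print(f"PSI = {psi(yesterday, today):.3f}")
```

A monitoring job could compute this daily per feature and trigger retraining once the score crosses the chosen threshold, which is exactly the "measure drift, then retrain" loop described here.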
Yeah, yeah. That's really fascinating. Clearly, you've done a lot of this work in your previous roles, and all that was in house, right? Yeah. So there are a lot of vendors out there pitching, you know, machine learning capabilities, AI capabilities; it's always one of the selling features. And I'm personally always skeptical of that, but let's put that aside. Would you recommend that folks who are new to this, let's say they're beyond the rules-based approach, build it in house or use a vendor?
It depends on what it is. So…
You never give me a straight answer. LOL
No, no, I came into this chat planning not to give you any straight answers. But yeah, I think that it really depends on what it is, and on the company or organization that's asking the question: […] what their product is, what they're looking to consider, what their size is, and how much headcount they have. But every day that I'm online, I'm seeing what's changing in the realm of personalization, machine learning, and enterprise-level machine learning, and there are more and more products that allow you to self-serve these things. When I first started out, the space was relatively mature; there was a good amount of products that allowed you to self-serve, but it doesn't compare to what it is now. At this point, you could go on any of the cloud providers, Google Cloud Platform (GCP), AWS, Azure; any of the major cloud platforms will have some baked-in auto-ML, or at least hosting and training infrastructure. So you no longer have to worry about setting all of that up yourself in house. It's just self-service; it's platform-as-a-service. The more ubiquitous these become, and the easier they are to use, which they are increasingly becoming, the less valid the argument for getting a third-party vendor becomes.
Where it might be valid is if you can't afford the headcount for people who would actually work on this internally, because this needs maintenance. Some of the things that I described, like constantly measuring concept drift, data drift, and model drift, need to constantly be evaluated. And as soon as you see those drifts, you need to retrain your models, or sometimes you might need to fundamentally change how the model makes its predictions. You need people to do that. So if you can afford the headcount for […] a few machine learning engineers to do that, then I think that's probably the better option. But again, there are, like you said, those third-party vendors that offer it as a service, and that's a good choice too. But I find that these models need to be adaptable, so you need someone to always be on top of them, always on top of what's going on, making changes as necessary.
Yeah, so we've been chatting a while now. I think it's time to change gears. I've actually never done this in person, so you'll be the first. It's time for the lightning round.
Nice. It actually will be a lightning round. There's no, like, thinking beforehand.
There's no thinking, you just say whatever comes to your mind. So, real quick: frequentist or Bayesian?
Yeah. Well, I should have picked up on that from your past comments. Machine learning or business rules?
Okay, I really didn't expect that.
Oh, we can delve into those after the lightning round if you’d like.
I can. I can cut that out if you want. But yeah. Okay. Very interesting. Yeah. R or Python?
If you couldn’t be in, basically machine learning, data science. What would you be doing today?
Probably operations research.
Operations Research. I don’t even know what that is. And finally, describe Munir in five words or less.
I would go with, “all over the place”
All over the place. All over the place, man. I’m doing things all over the place.
That's… perfect.
I'm gonna go with that.
Knowing you, yes, that's not bad. Cool. Um, that's it. Thank you for coming on to speak on our first podcast. It was a pleasure talking to you; I learned a lot. And I don't know how to outro this, so, yeah, I'll figure that out in post, I guess.
Yeah you can do that. Thanks for having me.
(Transcribed by https://otter.ai)