A Conversion Conversation with Ritual’s Grace Du
I recently had the chance to chat with Grace Du, a full-stack developer at Ritual (if you haven’t heard about it — reach out and I’ll send you a referral code). She’s done everything from data analysis, to search, to machine learning, to experimentation. Today she shares a bit about her experience building an experimentation platform, and her thoughts about how to approach personalization.
Rommil: Grace — so happy to chat with you! I’m excited to pick your brain today!
As companies finally start to realize the power of experimentation for, not only marketing optimization, but also for product development — they debate whether they should build something in-house or go with a 3rd-party vendor. What are the top 3 things that companies should consider when making that decision?
Grace: The first thing to consider is the existing infrastructure at the company. An experimentation system can be broken down into several components: an experimentation engine, some form of logging that persists the experiment group assignments, a stats engine, and a visualization tool. The experimentation engine handles experiment configuration and management, sampling, and experiment group/partition assignment. Logging, usually in the form of event logging or database writes, persists the assignment results. The stats engine enriches the assignment data with other operational data and computes KPIs. A visualization tool (Looker, Tableau, etc.) can be used to monitor and analyze experiment outcomes.
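The experiment group assignment Grace describes is commonly done with deterministic hashing, so the same user always lands in the same group without any lookup state. A minimal sketch (the function and group names here are illustrative, not from any particular platform):

```python
import hashlib

def assign_group(user_id: str, experiment: str,
                 groups: tuple = ("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment group.

    Hashing the user id together with the experiment name gives each
    experiment its own independent, stable partition of users.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(groups)
    return groups[bucket]
```

Because assignment is a pure function of the inputs, the logging component only needs to record the result once (e.g. as an event) for the stats engine to join against later.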
Experimentation systems from third-party vendors often offer an out-of-the-box solution: an integrated product that contains most, if not all, of the components mentioned above.
“If experimentation is critical to the business, then it should live on the critical code path.”
Depending on how mature the engineering infrastructure is at a given company, the need for a third-party vendor can vary greatly. On one hand, for startups whose infrastructure is relatively young (i.e. no event logging, no BI tools), a third-party vendor can be a great choice; it dramatically reduces the overhead. However, one thing worth considering is that some critical components of the experimentation system are also critical to the rest of the system (such as event logging). Investing in a third-party solution may save time and resources at the beginning, but it may limit the scalability of the rest of the system as the company grows. On the other hand, for medium-sized startups that have already built mature infrastructure, it may be difficult to integrate third-party solutions because of duplicated functionality.
The second thing to consider is performance. If experimentation is critical to the business, then it should live on the critical code path. In such cases, building in-house can be the better option because engineers can tailor the experimentation system to fit the rest of the system, which leads to lower latency and more flexibility.
“It requires expertise in different domains (engineering, system design, stats), and it usually takes 3–6 engineers more than 6 months to fully launch the system to production.”
The third thing to consider is engineering investment. Building something in-house entails a huge overhead. It requires expertise in different domains (engineering, system design, stats), and it usually takes 3–6 engineers more than 6 months to fully launch the system to production.
Personalization is such a hot topic these days — I’m interested to hear your thoughts on this as I know you’ve done quite a bit of work here. Lots of companies are getting into machine learning to deliver 1:1 experiences to drive more engagement. Is all this work worth it?
Personalization is definitely one of the biggest trends of the past year or two. I think there is a lot of value in personalization, as it helps with branding, UX, and user engagement. Machine learning is one approach to personalization, but not the only one. Machine learning only works well when the proper infrastructure is set up around it. To name a few pieces: an AI platform is needed for model training and prediction, a continuous deployment process should be set up to iterate on models in production, and an experimentation system is critical for tuning hyperparameters as well as assessing the performance of different algorithms. Jumping into machine learning without the proper infrastructure often leads to a half-working product that only works some of the time. Such a product creates more frustration than engagement.
An alternative to machine learning is a rule-based personalization system. It is less sophisticated than machine learning models, but people understand how it works. Based on my experience, and perhaps surprisingly, rule-based systems work for most personalization use cases, and sometimes better than the machine learning solution.
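A rule-based system like the one Grace describes can be as simple as an ordered list of predicates over a user profile, where the first matching rule decides what to show. A minimal sketch (the rules, field names, and content ids here are made up for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    matches: Callable[[dict], bool]  # predicate over a user profile
    content: str                     # content to show when the rule fires

# Rules are evaluated in priority order; the first match wins.
RULES = [
    Rule("new_user", lambda u: u.get("orders", 0) == 0, "welcome_offer"),
    Rule("lapsed", lambda u: u.get("days_since_last_order", 0) > 90, "winback_banner"),
    Rule("default", lambda u: True, "generic_homepage"),
]

def personalize(user: dict) -> str:
    for rule in RULES:
        if rule.matches(user):
            return rule.content
    return "generic_homepage"
```

One nice property of this design: because each predicate is a named, inspectable condition, the rules that fire most often are exactly the candidate features for a later machine learning model, which is the evolution path discussed next.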
“Machine learning is one approach to personalization, but not the only one.”
As companies develop personalization platforms, what approach would you recommend? I’ve always believed that companies should start with basic business rules and evolve from there. What’s your opinion?
I agree with you 100%. Starting with basic business rules is always a good idea. You may not come up with the best rules, but you will discover important features from that set of rules, and those features can then feed into machine learning models.
I see, that makes sense. In your opinion, what kinds of experiments would you suggest they run to validate everything is working?
A/B testing is a great way to determine whether a new feature performs better than the existing one.
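For a conversion-style A/B test, the standard check is a two-proportion z-test on the control and treatment conversion rates. A minimal sketch, assuming a simple two-group design with binary outcomes (the counts below are invented for illustration):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z statistic comparing conversion rates of two experiment groups."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both groups convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 4.8% vs 5.6% conversion over 10k users per group
z = two_proportion_z(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
# |z| > 1.96 corresponds to significance at the 5% level (two-sided)
```

In practice this is the kind of computation the stats engine mentioned earlier performs after joining assignment logs with conversion events.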
It’s time for the Lightning round! University of Toronto or Waterloo?
Hands-down University of Toronto.
Frequentist or Bayesian?
Pineapple on Pizza. Amazing or awful?
Well, I would eat it, but I would not order it myself.
LOL, I’ll forgive you. Finally, R or Python?
I know some folks who’d strongly disagree — but it’s all good. Thanks for chatting with me today!