A Conversion Conversation with Chargebee’s Anirhudh Sridharan
Sometimes Experimenters and Conversion Rate Optimizers focus on volume and test velocity, and while there is merit to that approach, since more experiments tend to result in more learning, we must always ensure our experiments align with business goals. There's not much value in high-velocity learning if the experiments don't answer questions we care about. I recently spoke to Anirhudh about his path into Experimentation, his focus on understanding goals, and who he draws inspiration from in this amazing space.
Rommil: Hi Anirhudh, how’s it going? I’m so glad we could chat!
Anirhudh: Hey Rommil, it’s good so far. Hoping for things to get better soon. Really glad to get on this chat as well.
I’d love to hear what you do over at Chargebee and a bit of your career path.
Sure! I recently finished college. I did my engineering degree here in India, but I was never inclined to get into that field after graduating. I wanted to do something I really liked. And I must say, I didn't expect that to be marketing until the day I joined Chargebee. I started here around June of last year, and thanks to my mentors, I was thrown into every function within the team, which helped me understand both the product and the process quickly. I worked on product resources, solution pages, and a few webinars, and created a set of playbooks for chatbots. A few months in, I stepped into Full-funnel Optimization: seeing how we can better capitalize on website traffic to get more conversions and similar results downstream. This broadly involves A/B testing, personalization experiments, and creating conversion-focused landing pages. It's been amazing so far. I've never had the Monday Blues!
That seems to be a common trend with folks in this field: starting out in one career, falling into Experimentation, and finding it's a perfect fit.
I'm interested to learn where you start when you develop an Experiment strategy. And how do you get stakeholders on board with your plan?
I'd start with the common goal, to understand why the experiment is needed in the first place. It wouldn't help to start an experiment, run it, and expect an uplift just because I think it could work, without knowing what I'm going for. The first step towards that common goal is building a hypothesis. This comes from historical data and an understanding of what works, both quantitative and qualitative. Building a data-backed case around that hypothesis is crucial; that's the convincing factor for other stakeholders. They'd want to know that it ties back to the common goal, something they are invested in.
Totally agree. Not enough can be said about getting clarity on the goals. It just makes everything so much simpler later on.
So imagine you’re working on a conversion funnel. In that case, where do you typically start? At the top or at the bottom? Or does it depend?
I would say that it depends. You could look at it either way. For me, it comes down to two things: either there is an opportunity or there is a problem. That can be found at the top, the bottom, or anywhere in the funnel for that matter. Maybe it's a high bounce rate on a page, fewer sales-qualified leads, or fewer conversions from activities like webinars, content downloads, etc. Then I'll optimize the funnel based on that. And if my experiments at the top or bottom could have a potential impact on my problem or opportunity, that's where I'd start.
“…it comes down to two things: either there is an opportunity or there is a problem.”
Which companies and practitioners do you consider leaders in the Experimentation space?
I really like the content from CXL Institute, Conversion Rate Experts, Proof, Clearbit, to name a few. I also follow Guillaume “G” Cabane, Peep Laja, and a few others as well. But I don’t think it’s “drag and drop”. Whatever they’ve tried and tested might not get the same kind of results for me. But the way of thinking about the problems/opportunities I mentioned earlier, and coming up with the right strategy can be derived from the materials that these companies offer. And that’s been pretty useful.
A few of those are on my radar too. But you’ve definitely given me a few more resources to check out, thanks!
So, let’s say you’ve just had a successful Experiment. How do you respond to someone asking you, “What impact will this have on our annual revenue?”
This is an interesting question. I think it’s hard to quantify precisely, but it’s not hard to arrive at a ballpark. More and more businesses are moving to a pipeline predictability model nowadays, i.e., instead of having a monthly target for the MQLs/SQLs that you’re able to bring in, set a target for the revenue that you’re able to bring in through those leads. And it makes sense to have that approach right from the top of the funnel.
We aren’t fully at the stage where we can confidently say that our monthly ICP traffic is X, our monthly ICP leads are Y, and we can expect $Z from the Y leads we bring in from X traffic.
But with products like Clearbit, Albacross, etc., which can look up the domain/IP and tell you so much about the company landing on your website, it’s starting to become a reality. And soon, I think, we should be able to state the pipeline impact of any experiment we run. The uplift in MQLs/SQLs that you bring in from running an experiment on your target audience should tell you how much revenue you can expect from them. This is something that I’m trying to crack from my end.
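The ballpark math described here can be sketched in a few lines. All of the numbers and the function below are hypothetical assumptions for illustration, not Chargebee figures:

```python
# Illustrative sketch of the pipeline-predictability ballpark described above.
# Every input value is a hypothetical assumption, not a real metric.

def estimated_revenue_impact(monthly_leads, lead_uplift,
                             lead_to_deal_rate, avg_deal_value):
    """Ballpark the monthly revenue impact of an experiment that lifts MQLs/SQLs.

    monthly_leads     -- baseline ICP leads per month (the 'Y' above)
    lead_uplift       -- relative uplift in leads from the experiment (0.10 = +10%)
    lead_to_deal_rate -- share of leads that become closed deals
    avg_deal_value    -- average revenue per closed deal
    """
    extra_leads = monthly_leads * lead_uplift
    return extra_leads * lead_to_deal_rate * avg_deal_value

# e.g. 500 leads/month, a 10% uplift, a 5% lead-to-deal rate, $6,000 per deal
impact = estimated_revenue_impact(500, 0.10, 0.05, 6000)
print(f"${impact:,.0f} per month")  # → $15,000 per month
```

The point of the sketch is the caveat in the conversation: the estimate is only as good as the lead-to-deal rate and deal value you assume, which is why it's a ballpark rather than a forecast.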
I’d argue a lot of places aren’t there. There’s always a fine balance between trying to show potential impact and not making any invalid statements that could set up false expectations. I’d be very interested in a more standardized approach to this, to be honest. So everyone in this industry can speak the same language.
Because people love talking about tools. What are some of your favourites?
I like VWO, it’s been pretty useful for running A/B testing experiments. We’re also trying this platform called Proof Experiences, which is a SaaS B2B personalization product. That’s been quite decent as well.
Can you tell me about your favourite Experiment thus far?
Our website has two primary CTAs: one lets you sign up for a trial account and the other lets you schedule a call with our team. One day, we asked ourselves why we show both CTAs to someone who has already completed one action (say, signed up). We had also arrived at the hypothesis that calls converted better than signups downstream. So why show the signup button again to these users when they return to our website hoping to learn more? We removed that CTA for returning visitors and saw an improved conversion rate on our scheduled-call CTA. I think it’s not just a good experiment, but also a better experience for our customers.
Definitely, focusing on better experiences for customers usually trumps every other approach.
It’s time for the Lightning Round!
Bayesian or Frequentist?
If you couldn’t work in Experimentation, what would you do?
A singer/an actor maybe?
I didn’t expect that! If you ever make the big screen I’ll be sure to tell people we’ve chatted!
Describe Anirhudh in 5 words or less.
An extroverted, realistic, happy-go-lucky person.
Amazing. Anirhudh, thank you for joining the conversation!