A Conversion Conversation with Kaspersky’s Sumantha Shankaranarayana
One could have the most powerful Experimentation engine, the greatest analytical minds, and the best engineering support in the world, but it would all be useless without the ability to effectively communicate the value of Experimentation to skeptics and newcomers. I recently spoke to Sumantha about his perspective on the importance of communication, demonstrating value, and what it takes to launch 40 tests in 4 weeks.
Rommil: Hi Sumantha, how are you? Thank you for chatting with me today!
Sumantha: Hi Rommil, I’ve been doing great. Thank you for asking. I hope you’re doing fantastic as well and so glad to be part of your passion project at Experiment Nation!
How about we start with hearing about your journey to where you are and what you do today?
Yes, for sure. I joined Kaspersky in March 2020. Given the worldwide havoc of Covid-19, it was quite a time to switch gears from consulting to working for one of the biggest privately held cybersecurity firms in the world as a Senior Conversion Rate Optimization Manager. I am entrusted with the global responsibility of creating landing pages, frameworks, and growth models that apply to dozens of markets on every continent.
Before this, I was Vice President and Senior Consultant at Indiginus.net and consulted for some of the brightest organizations, like Stanford Graduate School of Business, BharatMatrimony.com, and Alliance Real Estate, to name a few.
Earlier in my career, I held positions like Head of Conversion Optimization for a French startup called Namek.io; Head of Optimization and Experimentation for Desk Nine Pvt Ltd, which owns LegalDesk.com, SignDesk.com, and Melento.com; and Head of Optimization and Experimentation for a US firm, Neoticks LLC, which owns FileUnemployment.org, HomeWarrantyReviews.com, and many other sites.
You’ve been in this field for a few years now. What keeps you in Experimentation?
At my first job, though I was hired as a Web Developer, business needs led me to take up marketing in general and conversion optimization in particular for our US business. I was fascinated by data analysis and had a natural flair for writing persuasive copy.
At the time, I had to master this new marketing field while keeping up with what was now less than 5% of my time spent on engineering. I was up against two US public companies with workforces of over 5,000 employees, whereas I was alone with some external engineering help. This is what made me battle-hardened: a daily routine of the day job from 9 AM to 6 PM, then self-learning from 7 PM to 2 AM so that I could bring massive value to the table at work the next day. We did win big this way and were able to gain substantial ground.
The joy that I continue to get from being on an infinite learning curve is what keeps me going every single day.
What are some of the most interesting changes you’ve seen so far? And what continues to be a challenge?
Personally speaking, I've had this deep-rooted sense of what marketing is, and I keep questioning and improving it.
It is NOT finding somebody, throwing them down the funnel, and calling that acquisition, or yelling BUY BUY BUY at the top of your voice on the website and calling that a conversion!
The interesting change that I see is that more marketers are doing the righteous stuff, shedding in-your-face tactics and helping their consumers with content based on their intent. There’s this rise in conversational marketing that I love so much.
On the other hand, the challenge remains in educating stakeholders of the surprising power of online experiments and unlocking budgets to pursue online experimentation. People rarely appreciate how much learning, and how much saving, comes even from a failed test.
“…the challenge remains in educating stakeholders of the surprising power of online experiments and unlocking budgets to pursue online experimentation.”
In your opinion, what is the best way to measure the performance of an Experimentation Program?
There's no simple answer to this; it depends on the maturity of the organization. The end goal is to have a center of excellence for experimentation, and for the KPIs that the entire team goes after to make an impact down the line.
For example, if a team focuses heavily on reducing CAC (customer acquisition cost) without paying attention to LTV (lifetime value), that's a failure in my eyes. It is not only about acquiring 5 customers for a reduced advertising spend; the challenge is keeping those 5 customers for more than 5 years and beyond. To win at that, you have to keep creating 'Aha' moments.
How do you demonstrate the power of experimentation to skeptics? How do you attribute business impact to an experiment?
Skeptics are penny wise and pound foolish. The way to demonstrate the power of experimentation, even to naysayers, is to take a transparent approach. This is where CROs communicate effectively, showing iterative learning and the revenue impact a particular test has had.
If Revenue per Visit is now higher because of the A/B testing program, or if we've identified dollar churn or customer churn and put the brakes on the slide with thoroughly validated hypotheses, that's something huge. The CRO has to make that business impact evident and bring cross-functional teams together by educating and empowering them.
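As a rough illustration of the Revenue-per-Visit impact described above (all figures below are hypothetical, not from the interview), the back-of-envelope arithmetic looks like this:

```python
def revenue_per_visit(revenue: float, visitors: int) -> float:
    """Revenue per visit (RPV) = total revenue / total visitors."""
    return revenue / visitors

def projected_annual_impact(rpv_control: float, rpv_variant: float,
                            annual_visitors: int) -> float:
    """Naive projection: RPV delta times expected annual traffic.
    Real programs would discount for novelty effects and uncertainty."""
    return (rpv_variant - rpv_control) * annual_visitors

# Hypothetical test readout: 100k visitors per arm.
rpv_a = revenue_per_visit(50_000, 100_000)   # control: $0.50 per visit
rpv_b = revenue_per_visit(55_000, 100_000)   # variant: $0.55 per visit
impact = projected_annual_impact(rpv_a, rpv_b, 2_000_000)
print(f"RPV lift: {rpv_b / rpv_a - 1:.0%}, projected impact: ${impact:,.0f}")
```

A 10% RPV lift on two million annual visits projects to $100,000 in incremental revenue in this made-up scenario; the point is that making the arithmetic explicit is what lets a CRO communicate the impact transparently.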
How do you go about calculating that business impact while taking into account false positive rates?
Interpreting results lets us understand the business impact. Guarding against Type I (false positive) and Type II (false negative) errors is a matter of statistical certainty: you set your thresholds up front and keep countering all the validity threats.
Estimations are based on:
- The baseline success rate: the recent conversion rate of your control
- The sample size: recent traffic levels and the planned number of treatments
- The minimum detectable effect: the amount of difference you're after, which is the business objective in the first place. Whether it's a magnitude of 1000% or 5% is to be determined beforehand.
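Those three inputs map directly onto a standard pre-test sample-size calculation. A minimal sketch, using the usual normal-approximation formula for comparing two proportions (the specific formula and numbers are mine, not from the interview):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline: float, relative_mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant for a two-sided test of two proportions.

    baseline     -- recent conversion rate of the control
    relative_mde -- minimum detectable effect, as a relative lift (0.10 = +10%)
    """
    p1 = baseline
    p2 = baseline * (1 + relative_mde)            # expected treatment rate
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # power threshold
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# 5% baseline conversion, hoping to detect a 10% relative lift:
n = sample_size_per_variant(0.05, 0.10)
print(n)  # roughly 31,000 visitors per variant
```

Plugging in the same baseline with a much larger target lift shrinks the required sample dramatically, which is why deciding the magnitude of difference beforehand matters so much.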
The impact of an A/B test is the ROI of the testing program. Quite simply, you subtract the costs from the revenue and you have your profit. But does it stop there? No!
You go on to extract insights from the result, understand what it says about your customers, and answer important questions like:
- What motivates my customer?
- How do they respond to specific elements?
- What do my customers value the most?
- What causes them the most anxiety?
- Why are they dropping off at a certain point?
- Where are they in the overall conversation?
All of the primary and secondary metrics that you've enabled answer these questions, and from there you ask: "In what other places can this learning be helpful?" and fan out your tests for more learning. As long as you're learning, you're winning. This is the beauty of experimentation: you constantly challenge the control and work towards a never-ending impact. Your winner becomes the new control, and you keep on challenging these new controls.
How many Experiments should companies run? Is there an ideal velocity?
Experimentation is a gift that keeps on giving. Run as many meaningful experiments as you can on the identified pages.
Many large companies make several thousand commits per week across their engineering teams. Experimentation is no different; it should scale with the firepower one has in terms of resources.
The highest that I’ve done is about 40 A/B tests in 4 weeks, with a pretty solid design, product, engineering, and QA teams involved.
That’s impressive! How many people would that be to support 40 tests in 4 weeks? And what was the nature of those tests?
On the client side, I had 2 designers, 1 content person, a product manager and a VP of Product, 3 engineers, 2 senior engineering managers, and the head of engineering, plus 5 world-class QAs with the head of QA. On our side, we were three folks: a senior UX lead, an amazing engagement head, and a CRO/product lead (that's me), in our day-to-day build-measure-learn cycle.
In the mix were a few content tests, form-field and layout tests, and radical registration-improvement tests, all aimed at increasing the sign-up rate. The client is the world's largest online classifieds portal, with over 300 websites worldwide.
This program was a huge success because, on a floor of 250-odd employees, we had the heads of each department backing our conversion optimization methodology, and our experimentation program had #1 priority. You always want the influencers on your side, by coaching them and gaining their confidence with your expertise.
As you know, documentation is a critical part of a well-run Experimentation Program. From what you’ve seen over the years, what are the best tools for documenting and sharing Experimentation data?
I think the combination of Figma, Confluence, and JIRA works great for complete documentation. You want to get out of Sheets, which is so 1990s to me and too clumsy for scaling things to the next level.
Obviously, Experimentation is beneficial for marketing. What are your thoughts about leveraging it for product development?
My experimentation principles are rooted in Lean methodology. You have to test things in batches and ensure that the results are clean and effective.
Toyota pioneered this approach on its assembly lines: it would build in batches and test for quality, rather than operating like a competitor such as Ford, whose long assembly lines back in the 1960s let any error percolate through the system and end up being a very costly fix.
Similarly, for product development, you can create a simple landing page and ask your users if they are interested in a particular feature rather than building that all out and then learning that nobody wants to use it. So experimentation helps avoid big and expensive mistakes.
Can you tell us about your favourite Experiment?
My favourite has to be a dead-simple test that I ran in 2016: making phone diallers blink with Animate.css. It led to a 500% increase in phone leads.
This worked because I knew the audience like the back of my hand: they were middle-aged and not willing to fill out a form for something they could get by clicking a button to call. The blinking dialler caught their attention because I made it mimic the blinking check-engine light they paid attention to in their daily lives.
It’s time for the Lightning round!
Bayesian or Frequentist?
Bayesian, if I have to pick one. Works well for low-traffic tests too.
What is your biggest Experimentation pet peeve?
Many actually, but I’ll mention the top two:
- When can we end the test?
- Can we not have 85% statistical significance? Or why insist on 95% statistical significance or higher?
If you couldn't be in Experimentation, what would you do?
I wish I was a river, quenching the thirst of millions of humans and all living creatures.
That said, if I have to choose a role, I would continue to code and learn newer programming languages. I also wanted to direct movies and serve in the armed forces.
Describe Sumantha in 5 words or less.
Polymath, inquisitive, reckless, sarcastic, righteous.
Sumantha, thank you for joining the conversation!