Gartner’s Shiva Manjunath on championing data over opinion and bias

A Conversion Conversation with Gartner’s Shiva Manjunath

Simultaneously managing the CRO program for multiple companies is no trivial task. It takes a great deal of organization, focus, and stakeholder management. I recently had the good fortune to speak with Shiva from Gartner about generating and prioritizing test ideas, overcoming the challenges of working with so many functions, and how he (bravely) used Experimentation to challenge the long-held beliefs of senior leadership.


Rommil: Hi Shiva, thank you for taking the time to chat today!

Shiva: No problem! Happy to talk CRO with anyone and everyone.

For the benefit of our readers, could you share a bit about yourself and what you do?

For sure. I’m a Program Manager for CRO at Gartner. What this means is that I own and project manage everything conversion rate optimization for three companies: GetApp, Capterra, and Software Advice. Each brand has different value props and audiences, so the fun part for me is that I have three unique challenges rather than just one!

“…a large amount of input for our test ideas from other departments (engineering, UX, marketing, customer service, etc.) because they all have unique and valuable perspectives...”

Wow, with so many challenges, how do you generate and prioritize ideas?

For me, test generation is a hybrid of qualitative and quantitative research. Quantitative research largely comes from analytics data. Funnel analyses, pages with high bounce rates, highly trafficked pages, etc. are all focus points for me; I then switch gears to qualitative research on those pages to understand what elements resonate with people and find potential ways to optimize behaviour. Things like session recordings can be invaluable, but you need to start from a point of data; otherwise, you’ll spend hours watching people play with your site with no tangible next steps. That’s why I like to start by narrowing my ‘qualitative’ search using quantitative data. We also get a large amount of input for our test ideas from other departments (engineering, UX, marketing, customer service, etc.) because they all have unique and valuable perspectives that we sometimes don’t get from pure data analysis.
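To make that quantitative-first pass concrete, here is a minimal sketch (an illustration, not Shiva’s actual workflow) of flagging highly trafficked, high-bounce pages from a page-level analytics export so that qualitative research can focus on them. The file name, column names, and thresholds are all assumptions.

```python
# Minimal sketch: narrow the qualitative search using quantitative data.
# Assumes a hypothetical CSV export with columns: page, sessions, bounces, conversions.
import pandas as pd

df = pd.read_csv("page_metrics.csv")
df["bounce_rate"] = df["bounces"] / df["sessions"]
df["conversion_rate"] = df["conversions"] / df["sessions"]

# Focus session recordings and other qualitative research on pages that get
# meaningful traffic but bounce heavily. Thresholds are illustrative only.
candidates = df[(df["sessions"] >= 5_000) & (df["bounce_rate"] >= 0.60)]
candidates = candidates.sort_values("sessions", ascending=False)

print(candidates[["page", "sessions", "bounce_rate", "conversion_rate"]].head(10))
```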

In terms of framework, I have a prioritization model which takes a number of factors into account. In no particular order: potential revenue loss (i.e. the level of disruption of the test), engineering resources required, time to test, and a few others are all things I need to consider when prioritizing tests in the queue.
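As an illustration only (the factor names, scales, and weights below are assumptions, not Shiva’s actual model), a prioritization model like the one he describes can be as simple as a weighted score across those factors, with disruption risk, engineering cost, and long test duration pulling an idea down the queue and expected impact pulling it up.

```python
# Illustrative prioritization score: weighted sum of the factors mentioned above.
# Weights, 1-5 scales, and factor names are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    expected_impact: int      # 1 (low) .. 5 (high) potential upside
    revenue_risk: int         # 1 (low) .. 5 (high) disruption / potential revenue loss
    engineering_effort: int   # 1 (small) .. 5 (large) build cost
    time_to_test_weeks: int   # estimated weeks to reach sample size

def priority_score(t: TestIdea) -> float:
    # Higher is better: reward impact, penalize risk, effort, and slow tests.
    return (3.0 * t.expected_impact
            - 2.0 * t.revenue_risk
            - 1.5 * t.engineering_effort
            - 0.5 * t.time_to_test_weeks)

backlog = [
    TestIdea("Checkout CTA copy", 3, 1, 1, 2),
    TestIdea("New pricing page layout", 5, 4, 4, 6),
    TestIdea("Category page filters", 4, 2, 3, 4),
]

for t in sorted(backlog, key=priority_score, reverse=True):
    print(f"{priority_score(t):6.1f}  {t.name}")
```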

“Long term success for CRO means collaboration, and getting valuable input from many functions in your organization.”

What are some of the obstacles you face in terms of running Experiments?

A few come to mind. One of the problems I see often is a siloed approach to CRO. CRO isn’t just ‘marketing’ or ‘engineering’. Long term success for CRO means collaboration, and getting valuable input from many functions in your organization. I’ve won CRO tests and shared the results with the marketing team, and that drastically changed THEIR campaigns to capitalize on those learnings. And they’ve run studies which have informed a lot of my own CRO tests. So that cross-channel collaboration is vital to the long term success of the program. Another problem I see too often is resource allocation. Working with engineering and design teams is essential for a testing program’s success, but as we all know, every other team is fighting to get resource help from them too. One way to mitigate that is by hiring dedicated engineers and designers for your CRO team.

That leads me to ask: what role does a culture of Experimentation play at an organization?

A pretty damn big one. Experimentation is a core pillar of ‘data-driven decision making’: it’s one way of getting data to back your decisions. There are other ways to get data, both qualitative and quantitative, but experimentation is one of the main avenues for getting data to drive those decisions. I’ve been part of organizations where a HiPPO (Highest Paid Person’s Opinion) dominated decision making, and it can be very challenging to run an experimentation program in that type of culture. Thankfully, more and more companies are realizing the power of data, shifting towards data-driven decision making, and using experimentation to fuel those decisions.

“Test to learn, not test to win. If you’re testing to learn, you will win far more than if you just test to win.”

Do you have any advice in terms of sharing Experiment-related learnings with everyone?

I think the biggest one for me is transparency into the program. I use a roadmap tool for my testing program, and everyone within the organization has access to it. I’ve created different views for what different people may need to peek at. For example, the brand team may want a more simplified view with just pictures of what the test looks like, with the results document attached to it. The engineering team will need to know way more detail about the nitty-gritty execution of the test. However, everyone can see exactly which tests are running and when, what concepts we’re testing, links to test results, etc., all in one consolidated tool.

Changing gears. What career advice would you give anyone thinking about getting into CRO?

Have a proficient understanding of HTML, JavaScript, CSS, and how your testing tools work. This helps you run and iterate on tests more quickly and efficiently. More importantly, understanding how the testing tool works, and how it injects code to make changes, will help bridge the gap between ideation and execution of tests between the CRO team and the engineering team.

Speaking of careers, I’ve read you studied Neurobiology. How did you end up in CRO?

I grew up in a veterinary family: my dad and sister own a veterinary clinic back home. So I grew up around animals and medicine my whole life. When it came to deciding what I wanted to do for a career, I thought I wanted to go into medicine. After getting my undergraduate degree, I realized that I wasn’t passionate enough about medicine to do it for the rest of my life. Figuring I was pretty good with technology, and since marketing had always interested me, I applied for (and got) a job at a digital marketing startup. Working closely with the CEO there, and being immersed in digital marketing and website design/testing, I knew that’s what I wanted to do for a pretty damn long time.

With so much change in the industry, what are your top resources for CRO news and best practices?

I actually really hate the term ‘best practice.’ It’s a huge pet peeve of mine because it’s really deceptive. ‘Best practice’ implies it’s something you should just do without testing. You should never blindly adopt ‘best practices’ on your site without actually understanding their impact. However, you can definitely get inspiration for test ideas from test successes that you, or other people, have had.

That’s a great point. I should scratch that term from my vocabulary!

To actually answer your question though, I usually default to UX research as one of my top resources for understanding how user behaviour is changing and how to adapt tests to those changes (big fan of Nielsen Norman).

Ah, Nielsen Norman. I think I still have a bunch of their usability books on a shelf somewhere. Love their stuff.

One of the first Usability books I ever owned: https://www.nngroup.com/books/prioritizing-web-usability/

You’ve obviously seen a lot over the years. Do you have a favourite Experiment-related story?

A few moons ago, I joined an organization to run their experimentation program. The norm had been ‘run a test for 2 days, see what’s winning, push that as a winner, and continue to the next test.’ It was shocking, but the senior executives had very little understanding of the statistics behind A/B testing. So I came up with an idea: instead of running the test they wanted to run, I ran an A/A test. They peeked, saw the variation had a +20% lift to CVR after 2 days, and told me to push that variation to 100% and iterate. I asked them, ‘Are you sure?’ They replied, ‘Of course! Why wouldn’t we capitalize on a 20% lift to CVR?’ I told them the truth about the A/A test, and how both variations were exactly the same. After explaining that the test needed to run for a longer period of time due to the severe fluctuations early in a test’s life, they agreed to let me run the test to the needed sample size. Once it ended, we saw that there was no statistically significant lift to CVR. That’s when it registered with them that making decisions early in the testing process is really, really bad. They never peeked again ;)
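To see why that two-day peek was so misleading, here is a small simulation (my illustration, not part of the original story) of an A/A test: both arms share the same true conversion rate by construction, yet an early look at a small sample can show a sizeable ‘lift’ purely by chance, while the full planned sample typically shows no significant difference. The conversion rate, sample sizes, and choice of significance test below are assumptions.

```python
# Simulation of the A/A peeking problem described above (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
true_cvr = 0.05            # both arms are identical by construction
early_n, full_n = 1_000, 50_000

def observed_lift_and_p(n: int) -> tuple[float, float]:
    # Draw n visitors per arm from the SAME conversion rate.
    a = rng.binomial(1, true_cvr, n)
    b = rng.binomial(1, true_cvr, n)
    lift = (b.mean() - a.mean()) / a.mean()
    # Significance via a 2x2 contingency test (converted vs. not, per arm).
    table = [[a.sum(), n - a.sum()], [b.sum(), n - b.sum()]]
    _, p, _, _ = stats.chi2_contingency(table)
    return lift, p

for label, n in [("Day-2 peek", early_n), ("Full sample", full_n)]:
    lift, p = observed_lift_and_p(n)
    print(f"{label:11s} n={n:>6d}  observed lift={lift:+.1%}  p={p:.3f}")
```

Small samples swing wildly, so the early peek often reports a double-digit “lift” between two identical experiences; at the full sample size the difference shrinks toward zero and stays non-significant.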

Shiva. I love the way you roll.

On that note, it’s time for the Lightning Round! Frequentist or Bayesian?

Bayesian.

I should have guessed. Google and all. LOL

RIP my Stanley Cup dreams. Source: http://www.sportsclubstats.com/NHL/Eastern/Atlantic/Montreal.html

What would you be doing if you couldn’t do CRO?

Doing analytics for an NHL team. I geek out on data, and super geek out on hockey, so it’s the perfect pair!

Just don’t check out the Habs this season. They’ve been brutal.

Finally, describe Shiva in 5 or fewer words.

Hilariously weird optimization nerd.

Shiva, thank you for joining the Conversation!

Thanks for having me!

