Maulin Shah shares his key elements of an experimentation roadmap

Hi Maulin, thanks so much for taking the time to chat with us! How have you been?

Hi! I am doing well. Thank you for reaching out and taking the time to chat with me. I am really excited to be a part of the Experiment Nation group and to be networking and sharing insights around everything CRO. :)

Let’s start off with a bit about yourself. Could you please share with our audience what it is that you do and a bit about your career journey up to this point?

I am currently a Product Manager overseeing the Digital Data Strategy for a well-known Health Insurance Provider. As part of that data strategy, I help provide guidance on Analytics, Data Governance, and CDP to drive Customer Engagement (via Email, SMS, Push), visualize the Customer Journey (via Analytics), and deliver Personalization. I started my career on a technical track, serving as a Business Analyst and QA Analyst, and ultimately ended my "technical" career as a Development Manager overseeing a team of over 10 front-end and back-end developers. I started my path in Digital Marketing, specifically Analytics, Tag Management, and CRO, in 2011. Since then, I have been building CRO programs and teams and executing CRO strategies and experiments across the Retail, Telecom, and Health verticals.

What’s your approach to using Experiments to help develop features and strategies?

My approach to using Experiments to help develop features and strategies is to truly understand the underlying business goals and tie them to the available data (both quantitative and qualitative).

What are the key elements of an experiment-driven roadmap? What has been the reception to this approach? Particularly from HiPPOs?

In my opinion, the key elements of an experiment-driven roadmap are defined processes, a testing/experimentation governance model, and a data-driven mindset. Putting these things in place has, in my experience, helped me establish and grow CRO programs. Taking this structured approach (customizing it based on organizational structure and maturity model, allowing for fluidity) has typically yielded positive results. I have also found this approach useful when met with resistance driven by company politics or by the "HiPPO". This reminds me of a comment I read some time back from Dan Siroker, the former CEO of Optimizely: "A/B testing is like a kryptonite for those who don't want to change". Mr. Siroker stated in that article that companies need a "champion for testing" before they can reach optimization maturity. He goes on to say that "in order to initiate change in culture the right elements need to be tested and the test must be planned, implemented" and measured correctly, with the results used to properly inform future initiatives. I totally agree with his sentiment.

What effective strategies have you discovered thus far for sharing learnings?

The two most effective strategies I have discovered, not only for sharing learnings but also for keeping all stakeholders and business channels engaged and involved, are the following. First, create an "Experimentation/Testing Library": a maintained, living document that contains a one-pager on every experiment that has been run. The one-pager contains the experiment run date/time, the hypothesis, wireframes of the control and alternate experiences, the KPIs, and any segmentation information. Lastly, it includes high-level reporting and a "story" of the learnings and possible next steps (optimization).

Second, schedule bi-weekly (cadence can vary) "Optimization Check-in" calls with key stakeholders. The agenda for those calls is to review three things: first, what tests are currently in the queue (like a backlog); second, what tests are currently running; and last, what tests have completed since the last meeting. This is the step where I share the results and learnings and prompt my stakeholders to start discussing them, to help shepherd the "optimization" cycle.
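For illustration, here is a minimal sketch of how one entry in such an Experimentation/Testing Library could be captured as structured data. The ExperimentOnePager class, its field names, and the sample values are assumptions made for this example, not a format prescribed in the interview.

```python
# A minimal, illustrative sketch of one "one-pager" entry in an
# Experimentation/Testing Library. Field names and example data are assumptions.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class ExperimentOnePager:
    name: str
    run_start: datetime                   # when the experiment started
    run_end: datetime                     # when it was stopped
    hypothesis: str                       # the hypothesis being tested
    control_wireframe: str                # link/path to the control wireframe
    variant_wireframes: list[str]         # links/paths to alternate experiences
    kpis: list[str]                       # primary and secondary KPIs
    segments: list[str] = field(default_factory=list)  # any segmentation applied
    results_summary: str = ""             # high-level reporting
    learnings: str = ""                   # the "story" of what was learned
    next_steps: str = ""                  # possible follow-up optimizations


# Hypothetical example entry
entry = ExperimentOnePager(
    name="Homepage hero CTA copy",
    run_start=datetime(2023, 3, 1),
    run_end=datetime(2023, 3, 21),
    hypothesis="A benefit-led CTA will lift click-through to the quote flow.",
    control_wireframe="wireframes/hero_control.png",
    variant_wireframes=["wireframes/hero_variant_a.png"],
    kpis=["CTA click-through rate", "quote starts"],
    segments=["new visitors"],
)
print(entry.name, entry.kpis)
```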

Finally, it’s time for the Lightning Round! Are you a Bayesian or a Frequentist?

Bayesian.
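(For readers unfamiliar with the distinction: a Bayesian reading of an A/B test typically reports the probability that the variant beats the control, rather than a p-value. Below is a minimal, purely illustrative sketch using a Beta-Binomial model; the conversion numbers are made up.)

```python
# Minimal Bayesian A/B comparison with a Beta-Binomial model.
# All figures are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Observed data: conversions / visitors for control (A) and variant (B)
conv_a, n_a = 120, 2400
conv_b, n_b = 145, 2400

# Beta(1, 1) prior gives a Beta posterior for each conversion rate
post_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, size=100_000)
post_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, size=100_000)

# Probability the variant beats the control, and the expected relative lift
p_b_beats_a = (post_b > post_a).mean()
expected_lift = (post_b / post_a - 1).mean()

print(f"P(B > A) = {p_b_beats_a:.1%}, expected relative lift = {expected_lift:.1%}")
```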

If you couldn’t work in Experimentation, what would you do?

If I couldn't work in Experimentation, I would continue working on Data Governance, CDP, Analytics, and Tag Management. In my current engagement I am helping a major Healthcare provider define and execute a roadmap for consumer data to enable meaningful campaign personalization via targeting, segmentation, and predictive modeling. While that is completely outside of the "Experimentation" world, I do find my "Experimentation" side rearing its head quite often. :)

Is there anyone you'd like for Experiment Nation to interview in the future?

Kelly Wortham of the Test & Learn Community

Awesome! We'll see what we can do. Thanks so much for chatting with me today!
