A Conversion Conversation with DURMC’s Naoshi Yamauchi
I recently spoke to Naoshi about his first A/B Experiment, how he nurtures an Experimentation culture with clients, and an often overlooked piece of information when selecting an Experimentation platform vendor.
Rommil: Hi Naoshi! Thank you for taking the time to chat today! How have you been?
Naoshi: Considering the current situation, I feel very grateful for the extra time with my family and our health.
Happy to hear that. For the benefit of our readers, can you share a bit about your career journey and what you’re up to these days?
My first a/b test was in 2008, when I worked in the business intelligence group of a 75-person start-up in Raleigh, NC. I was convinced that our sign-up page, though beautiful, lacked fundamental elements to convert visitors into account sign-ups. I offered my suggestions to the head of creative/UX at the time and he shut me down very quickly.
Ha. Been there. How did you get around that?
I was so determined that I went directly to one of his direct reports and asked him to design a quick landing page based on my paper sketch. He said he had 20 minutes and put one together. It was so basic, I could have done it in Microsoft Paint. I then went to my colleague running paid search and had him split 50% of the traffic for one of the sign-up campaigns to my landing page. My crappy-looking landing page beat the control by 37%. I brought the results to the Creative Director and he couldn't believe it. I offered to show the results to other executives and he said there was no need for that. He polished up the landing page himself, and the rest is history. I was hooked on a/b testing.
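(An editorial aside for readers curious about the math: checking whether a lift like that clears statistical significance takes only a few lines. The sketch below runs a standard two-proportion z-test; the visitor and conversion counts are hypothetical, not figures from Naoshi's experiment.)

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    lift = (p_b - p_a) / p_a  # relative lift of B over A
    return lift, z, p_value

# Hypothetical 50/50 paid-search split: 2,000 visitors per arm
lift, z, p = two_proportion_z(100, 2000, 137, 2000)
print(f"lift: {lift:+.0%}, z = {z:.2f}, p = {p:.4f}")
```

With these made-up counts the test reports a +37% lift at p < 0.05, which is the kind of result that convinces a skeptical Creative Director.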
I love that story. Sometimes, and within reason, of course, you simply have to be constructively defiant to prove your point. Where did you go from there?
I went on to become the first analyst at Brooks Bell and had a great 10-year journey there with my last role being CEO. I was able to make a lot of positive impact as well as make a ton of mistakes.
After stepping down from the company, I started DURMC, a consulting company based in Durham, NC. I focus on advising small companies on growth and consulting for enterprise companies on scaling their optimization programs.
Could you share the top three challenges you’ve faced as you nurtured a culture of experimentation with your various clients? How have you managed to overcome them?
Challenge 1: Culture is about believing and behaving. Most companies say a/b experimentation is very important. However, if you look at the resources and priorities they actually put in place, it often does not reflect that sentiment. One way to bring awareness to this is by showing what 'best in class' programs look like, in terms of how much they invest in technology, people, and time for experimentation, compared to that particular client. Giving concrete examples of team sizes and dedicated experimentation resources at top programs versus theirs makes the gaps clear.
Challenge 2: If the executives are not bought in, it is usually a losing battle all the way down the chain. As an outside agency, it is important to form relationships with senior executives at these companies to build trust, and to find opportunities to get executives excited about moving their critical metrics. The higher up you go, the more a/b testing becomes a vehicle for getting outcomes and hitting goals. Making sure the executives are clear on the impact of experimentation is critical.
Challenge 3: Lack of education. In large organizations, there are pockets of highly educated people who are passionate about experimentation. To raise the culture across the company, it is key to make a real effort to share wins and teach the basics of experimentation to others. Try doing a 'roadshow' within the company to teach and share with key stakeholders and groups.
“If the executives are not bought in, it is usually a losing battle way down the chain.”
How do you suggest companies measure their progress in terms of driving this culture?
I am very performance-based, so I make sure programs track benchmark metrics such as number of experiments per month, insights gained, and estimated impact, and monitor them over time.
As companies start to invest in Experimentation, they often want to measure ROI. Do you have any suggestions as to how they should set their KPIs to demonstrate this?
I shared the basic performance metrics in the previous question. One piece of advice here: find the top KPIs and keep it simple. I have seen too many programs try to track way too many different metrics around their success, and it can become a deterrent to rapid progress.
“…find the top KPIs and keep it simple.”
As companies evaluate different experimentation platforms, what are some of the things that often get overlooked when selecting a vendor?
This is going to sound basic: get references on customer support. No platform out there will work perfectly. Things will go wrong, especially as you ramp up, and it is very important that you are able to get quality support in a timely manner.
This is great advice. I think all Experimenters at one time or another have hit their heads against a wall trying to reach technical support.
What kinds of things are you looking forward to from an experimentation-perspective in 2020?
Our world has been turned upside down quite a bit lately. I am curious to see how businesses, especially the ones that took a major hit, prioritize experimentation as they ramp things back up.
For those newer to this field, are there any thought-leaders in this space that you’d suggest that they follow?
Rhett Norton has been putting out a lot of helpful content over this past year on YouTube.
Ya Xu at LinkedIn, though I don't know her well, is crazy smart and definitely one of the leaders in this space.
Vito Covalucci at Capital One is a really fascinating guy with lots of thoughts around experimentation and machine learning.
Awesome. Thanks for taking the time to chat today. Let's wrap things up with a quick Lightning round.
3rd party solutions or in-house solutions?
What is your favourite data-visualization tool?
No particular favourite.
Bayesian or Frequentist?
What is your biggest pet peeve with regards to Experimentation?
Lame test ideas.
Describe Naoshi in 5 words or less.
Dad, husband, caring, evolving, competitive.
Naoshi, thank you so much for joining the conversation! It’s been a pleasure to chat with you.
Thanks for taking the initiative to do this Rommil. It shows great leadership!