A Conversation with Jobber’s Connor Bradley about Experimentation
As Growth teams…grow, it becomes increasingly difficult to coordinate all the Experiments they run. A strong strategy is needed to ensure that everyone is on the same page and not running into each other. I recently spoke to Connor about how he empowers his team to drive growth at various stages of the funnel without colliding or creating unwanted bottlenecks.
Rommil: Hi Connor, how’s it going?
Connor: Hey Rommil, I’m okay. Thanks to 2020, I’ve permanently replaced ‘good’ with ‘okay’. Quarantine and social injustice will do that to you. With that said, I’m excited to chat with you about experimentation & growth!
Likewise! So let’s start with the customary question about what you do and how you got to where you are today.
Sure, I’m the Manager of Growth at Jobber. Jobber is a B2B SaaS company that provides software to the home services market to help organize & grow small home service businesses. Our headquarters is in Edmonton, AB, and our second office is in Toronto, ON.
The journey started in Edmonton, AB. I was hired at Jobber out of university into an anything-and-everything marketing role when we were at about 30 employees. After a personal move to Toronto, I worked a few other roles that focused my skill set on revenue-driven marketing, and that’s where I fell in love with data and the processes and workflows surrounding it. Once Jobber opened a second office in Toronto, I had the opportunity to come back and be a part of building the growth arm of Jobber. Three years later, here we are with an exciting opportunity and more than 200 passionate Jobberites!
Having been in growth for a few years now, can you tell us what role Experimentation has in driving growth?
Experimentation is about testing decisions that involve uncertainty. In many cases, managers are uncertain about opportunities and they lack the data to inform strategic decisions. Through experimentation, you are able to slice out individual assumptions you have and validate them.
Pairing experiments with growth metrics is a match made in heaven. Instead of jumping into a decision, you can validate it through an experiment that informs the revenue or unit impact on the business.
“Experimentation is about testing decisions that involve uncertainty.”
In the early days of Growth, traffic is pretty low — especially before you nail down product-market fit. How do you run Experiments in that kind of situation?
In many cases, you don’t. Running experiments with low traffic will limit your ability to make decisions, especially for early-stage companies. Of course, you can look to identify experiments that will return larger results to mitigate the amount of traffic needed, but not all management decisions can or should be resolved by experiments early on.
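To make that trade-off concrete, a standard two-proportion power calculation shows how the traffic required shrinks as the effect you’re hunting grows. This is a rough sketch using only Python’s standard library; the 5% baseline and the lift sizes are hypothetical, not Jobber’s numbers:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_arm(p_control: float, p_variant: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per arm for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    p_bar = (p_control + p_variant) / 2             # pooled rate under H0
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_control * (1 - p_control)
                                 + p_variant * (1 - p_variant))) ** 2
    return ceil(numerator / (p_variant - p_control) ** 2)

# Hypothetical 5% baseline conversion rate:
print(sample_size_per_arm(0.05, 0.06))  # → 8158 per arm for a 20% relative lift
print(sample_size_per_arm(0.05, 0.10))  # → 435 per arm if the effect doubles conversion
```

A 20% relative lift needs roughly twenty times the traffic of a doubling, which is why early-stage teams hunt for the large, obvious wins before running formal experiments.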
As your team starts to run more experiments, how do you distribute authority among experimenters so that you don’t create unwanted bottlenecks or run into each other?
Typically a growth manager owns all experiments. At Jobber, we’ve distributed ownership to each team member. An experiment owner is accountable for the documentation, communication, execution, and analysis of an experiment.
Experiments can be anywhere within the funnel. This requires quickly shifting mindsets into new data sets, processes, and workflows. To help focus, I look to assign drivers to specific areas of experimentation. Here, a member owns the sequential testing within an area while concurrently supporting other areas of experimentation that are driven by other team members.
I want to touch on communication. How do you go about sharing the learnings from Experimentation to ensure the same mistakes are not repeated?
Before an experiment runs on our team, it needs to be documented and reviewed. We have experiment templates that ensure each experiment has a hypothesis, baseline data, prediction, next steps, and results. During this stage, we share our documentation to all key stakeholders before we ship. This ensures we don’t have any surprises or red flags pop up while something is live.
After an experiment, we collect our learnings and share them in our public channel. We give a brief description with the overall result. The post also links out to the full experiment document, and we may accompany it with a Loom video to showcase the changes we tested.
Each month we bundle together our key learnings from experiments into a monthly review; this review is also recorded and shared in our public Slack channel.
A question I love asking growth leaders is how to organize their teams. How would you describe your dream Growth-team?
Speaking from a SaaS perspective, my dream growth team is cross-functional. It is a mix of engineers, designers, and marketers. In addition, the team requires data analysts. We can only self-serve so much and need their expertise to power our ideation & analysis.
These teams should be no more than six people (ours are four). First, we move faster thanks to quicker feedback loops and communication. Second, our ideation and minimum viable tests are already scoped down because of the smaller engineering resourcing. This forces our hand into thinking about simpler ways to validate our assumptions and keep up our pace of experimentation.
A hot topic in this space has been Bayesian vs. Frequentist — with many Experimenters exploring both approaches to see which makes sense to their businesses. I was wondering what your thoughts were on this debate.
WOOF! I remember spending a full day looking at the differences. After reading through all the wild debates from the statisticians of the world we went ahead and looked at running our experiments using both approaches. Sample sizes can be difficult to come by in certain areas of the business. Leveraging a Bayesian approach that includes prior knowledge, or even beliefs, when calculating the probability of the event can come in handy when this happens.
For most tests, we don’t need 100% certainty. We are not in pharmaceuticals saving lives or looking for COVID-19 vaccines. Our team is willing to try different methods to continue to add to our learning behind experiments. We’ve run into issues with both approaches but we are slowly coming to grips with some decisions being driven by risk tolerance over statistical significance.
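To make the contrast concrete, here is a minimal sketch that analyzes the same A/B data both ways: a frequentist two-proportion z-test alongside a Bayesian Beta-Binomial estimate of the probability that the variant beats control. The conversion numbers are hypothetical, not the team’s actual results:

```python
import random
from math import sqrt
from statistics import NormalDist

# Hypothetical results: control 50/1000 conversions, variant 62/1000.
ca, na = 50, 1000
cb, nb = 62, 1000

# --- Frequentist: two-proportion z-test (two-sided p-value) ---
pa, pb = ca / na, cb / nb
pooled = (ca + cb) / (na + nb)
se = sqrt(pooled * (1 - pooled) * (1 / na + 1 / nb))
z = (pb - pa) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

# --- Bayesian: Beta(1, 1) priors, Monte Carlo estimate of P(variant > control) ---
random.seed(42)
draws = 100_000
wins = sum(
    random.betavariate(1 + cb, 1 + nb - cb) > random.betavariate(1 + ca, 1 + na - ca)
    for _ in range(draws)
)
prob_b_beats_a = wins / draws

print(f"p-value: {p_value:.3f}")                      # → p-value: 0.243
print(f"P(variant beats control): {prob_b_beats_a:.2%}")
```

With these numbers the p-value sits well above 0.05, yet the posterior gives the variant roughly an 85–90% chance of being better. That is exactly the kind of result where risk tolerance, not a significance threshold, ends up driving the call.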
Finally, it’s time for the Lightning Round!
For those who are new to the Growth space, who would you say are the thought-leaders today?
I call them the Reforge Mafia: Brian Balfour, Casey Winters, and Andrew Chen are the OGs. Click on any program to find other EIRs teaching growth at Reforge (highly recommend).
Oh Casey’s great! He trained all the PMs at Ritual — wicked sharp guy.
What is the biggest myth in growth these days?
We’re just a bunch of hackers.
Oilers or Flames?
Describe Connor in 5 words or less.
Connor, thanks so much for joining the conversation!