A conversation with TechTarget’s Renee Thompson about experimentation
Experimenting on properties visited by B2B customers can be quite different from experimenting on B2C properties. I recently spoke with Renee to explore these differences in more detail, as well as how she suggests handling situations where you don’t have a lot of traffic.
Rommil: Hi Renee, thanks for taking the time to chat today! How have you been?
Renee: I’ve been doing well. I’ve worked remotely for the past ~3 years, so the COVID situation hasn’t hit me as hard as it has some others. I have noticed that now that my entire company is working remotely, we do a lot more (almost all) video meetings, which is nice. It’s nice not to be just a voice on the phone, which is what I’d gotten used to.
I’d love to hear about what you do and a bit about your career journey thus far.
I am responsible for online engagement and conversion optimization at TechTarget. TechTarget operates a network of 100+ websites that provide unbiased editorial content covering the B2B technology industry. These sites span 10k unique technology topics, with 1,000+ editors and expert contributors. My goal is to give our audience the best experience when they come to our sites and to promote engagement, including [free] registration. When I started at TechTarget I was working in Product Management, and “A/B Testing” was something our executive management had heard about and thought we should be looking into. It started as a ‘side job’ for me and has grown into a full-time position.
Congrats, that’s a great story!
Could you tell me what the key differences are between catering to B2B companies versus B2C companies in terms of Experimentation and Optimization?
We aren’t an agency in terms of website optimization. Our clients are B2B companies who want to reach our unique, targeted audience, but my focus is on our site users rather than our paying clients. That said, since we are dealing with B2B users, there are definitely differences from B2C.
- Long sales cycle: B2B sales cycles typically take 6+ months, sometimes up to 18 months. The cycle has multiple stages, and buyers (our users) need different types of content depending on the stage.
- Offline sale: with B2B, the sale is made offline rather than online via e-commerce, but all the research is done online during those sales-cycle phases.
- Buying team: unlike B2C, where one person makes a purchase (often on impulse), in B2B a buying team of multiple people is involved in the research and the decision.
- High risk: with B2C, if you buy a pair of shoes you don’t like, you can return them. Or if not, you’re not out that much money if you change your mind later. With B2B, these are large, expensive deals that involve integration and many teams and people during the implementation process. If you make a bad decision, you’re responsible for costing your company dollars and time, and could even potentially risk your job.
Thus, the experimentation done at the B2B level is different. B2C focuses a lot on eCommerce and that entire process: shopping-cart optimization, abandonment, etc. B2B tends to focus more on other types of conversions, registrations and leads being two large ones. During experimentation, it’s also important to find ways of reaching people and providing content and experiences that match each stage of the buy cycle.
This can result in challenges because it’s sometimes difficult to determine what the conversion point is, what metrics to focus on, and where to focus experimentation efforts. It’s often not as ‘clear cut’ as when you’re dealing with an eCommerce experience.
Engagement is a BIG focus for us, and measuring engagement can be tricky. You can (and should) measure things like pages per visit, page or session duration, bounce rate, etc. But it’s also important to look at downstream activity. Since the sale doesn’t happen at the same time as the various digital touchpoints, it’s important to link them together with adequate tracking and see how they contribute to downstream revenue or other measured business value.
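As a minimal sketch of the kind of downstream linkage Renee describes, assuming on-site engagement events and CRM revenue share a common user identifier; the column names and figures below are hypothetical, not TechTarget’s actual data model.

```python
# Hypothetical example: join on-site engagement to revenue recorded later
# in a CRM, so digital touchpoints can be tied to downstream business value.
import pandas as pd

engagement = pd.DataFrame({
    "user_id": [1, 2, 3],
    "pages_per_visit": [4.2, 1.5, 6.0],
    "registered": [True, False, True],
})

# Revenue closed months later, keyed by the same user_id.
crm_revenue = pd.DataFrame({
    "user_id": [1, 3],
    "closed_revenue": [25000, 80000],
})

linked = engagement.merge(crm_revenue, on="user_id", how="left").fillna({"closed_revenue": 0})

# Compare downstream revenue for registered vs. unregistered visitors.
print(linked.groupby("registered")["closed_revenue"].mean())
```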
Oftentimes, stakeholders resist making decisions based on Experiment results. What’s your approach to changing this kind of culture?
It’s always difficult when you’re dealing with stakeholders who “know their stuff” but have expectations that aren’t met during experiments. The good news is that data doesn’t lie. I find it’s often important to combine the quantitative test data with qualitative data so that we have a more well-rounded picture of what’s happening: not only “the data doesn’t support that hypothesis,” but also “here’s what users had to say about this.” Hearing in users’ own words that they don’t understand a term or campaign, or can’t find something that seems obvious to internal stakeholders, can make a huge difference. And of course, treating everyone with respect!
B2B sites aren’t typically known for having a lot of traffic. How do you Experiment in that kind of situation?
We are actually very lucky because, combining the traffic from our network, we have a huge amount of traffic, and we are able to run experiments across multiple sites since many of them share a common template structure. However, my advice for others who don’t have much traffic would be:
- Try more qualitative testing, using tools such as Hotjar or UserTesting.com. Hotjar allows you to create heatmaps to see where users are clicking and spending time on your pages; you can also set up surveys/polls and do user recordings. UserTesting.com is a great (although pricier) tool that allows you to do online usability testing and find out from your users what they are thinking, how they feel about your site/messaging, whether they can perform tasks, etc. (Another interesting one is VerifyApp, where you can put a design out and get a panel of users to answer questions about it. It can save a lot of work and time if you’re trying to determine whether a design can work, whether users understand the language, etc.)
- Be really smart about prioritization. With small traffic, tests take longer, so you really have to choose the ones that are going to have business value. Put a value proposition together with expected business value (conversions, revenue, savings, etc.); a rough scoring sketch follows this list.
- Don’t forget about other channels. Email is still a really important marketing channel, and there is a lot you can do with experimentation there even though you don’t typically use an A/B testing tool. You can also do lots of testing with social channels and engagement.
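To make the prioritization advice above concrete, here is a minimal sketch of ranking test ideas by expected business value per unit of effort. The idea names, dollar values, and scoring formula are hypothetical illustrations, not a framework Renee prescribes.

```python
# Hypothetical prioritization sketch: rank candidate tests by expected
# business value per day of effort. Names and numbers are illustrative.

ideas = [
    {"name": "Registration CTA copy", "monthly_conversions": 120, "value_per_conversion": 40, "effort_days": 3},
    {"name": "New gated-content flow", "monthly_conversions": 300, "value_per_conversion": 40, "effort_days": 20},
    {"name": "Footer link cleanup", "monthly_conversions": 10, "value_per_conversion": 40, "effort_days": 2},
]

def score(idea):
    # Expected incremental value divided by estimated effort.
    return idea["monthly_conversions"] * idea["value_per_conversion"] / idea["effort_days"]

for idea in sorted(ideas, key=score, reverse=True):
    print(f'{idea["name"]}: {score(idea):,.0f} expected value per effort-day')
```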
Even more generally, how do you decide where to start Experimenting altogether?
You need to make sure you’re aligning with your core business KPIs and start from there. Anything that doesn’t have potential ROI around one of these is not worth doing, especially when you’re starting out.
Some people like to say you should start with low-hanging fruit. This can be appealing and helpful because those are often the most visual, demonstrative changes (like messaging, calls to action, colours), where people get immediate results and an “aha!” moment. But those run out quickly, and the real value is usually in more complex projects (at least for us).
Stakeholders typically want to see positive ROI; granted, I don’t know of any companies that don’t. When someone asks how a series of Experiments is impacting their bottom line, how do you go about answering them?
We always have a hypothesis and potential business value outlined at the start of a testing project; in fact, this is how we prioritize our testing queue. So we can show expected value ahead of time, and then during a test and post-test we are able to determine if it met expectations.
However, it’s important to remember that testing is TESTING… and not all tests are going to be successful. Sometimes great knowledge comes from the failed experiments. These are valuable because you didn’t expend a large amount of development effort on something that wasn’t going to perform up to par, and because there are always insights to be gleaned when a test fails.
As I mentioned above, it’s important to look not only at how a specific test interaction is performing but also at how it impacts downstream activity, revenue, etc. When a test wins and we ask our engineering team to build out the experience in a production environment, we always model how we expect it to impact downstream results. Then, once something is built in production, you can and should look at its performance so you can tie it back to the test results and see whether it is indeed performing to expectations (and if not, look into why). Typically we see things perform even better in production than they did in testing.
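As a rough illustration of the kind of projection and post-launch comparison described above; all numbers are hypothetical, not TechTarget’s actual figures or model.

```python
# Hypothetical projection of downstream impact from a winning test,
# then a comparison against what production actually delivered.
baseline_monthly_registrations = 10_000
observed_test_lift = 0.08          # 8% relative lift measured in the experiment
value_per_registration = 12.0      # assumed downstream value per registration, in dollars

expected_registrations = baseline_monthly_registrations * (1 + observed_test_lift)
expected_value = expected_registrations * value_per_registration
print(f"Expected: {expected_registrations:.0f} registrations (~${expected_value:,.0f}/month)")

# Once the feature is live, plug in the measured production number and compare.
actual_registrations = 11_050
print(f"Actual vs. expected: {actual_registrations / expected_registrations - 1:+.1%}")
```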
For those who are new to Experimentation, how do you set them up for long-term success?
All of the above 🙂
You have to have a process defined: from prioritization to resources to tools.
Document the expected business value for each idea, and the estimated effort. Weigh these against other opportunities to determine what to work on in what order.
Determine who will work on testing initiatives. Do you have a dedicated team? Do you have “leftovers” who are primarily assigned to other tasks? If you can’t get resources, you won’t make much progress.
Decide what tools you want to use. We use Optimizely, along with multiple qualitative tools. Whichever tool you choose, make sure you have resources who become experts and can be efficient with the work.
You need to get upper management support. In order to do this, you have to speak their language. Make it about what you can do, what problems you’ll solve, how you’ll impact KPIs. They don’t want to get bogged down in details.
Finally, it’s time for the Lightning Round!
Frequentist or Bayesian?
Ugh. Honestly, there is so much debate on this and I could argue for either one. At the end of the day, I really think you’re arriving at generally the same answer either way. I’d rather spend my time putting good ideas and business cases together!
What is your favourite Experimentation tool?
Optimizely
What is your biggest pet peeve in Experimentation?
When people say they “tested” something, but when you ask for the details you find that their test had far too little data to reach statistical significance and/or their premise was flawed. “Oh, we ran this one campaign and it was 30% better than the other one, therefore this is the type of messaging we should use moving forward.” When asked about the data set: “oh, it was one email campaign from one day that sent people to a landing page. 18 people clicked.” 🙁
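For context, a back-of-the-envelope two-proportion sample-size calculation shows why a handful of clicks can’t support a “30% better” claim; the baseline rate and lift below are hypothetical.

```python
# Rough per-variant sample size needed to detect a 30% relative lift on a
# 2% baseline click rate with 95% confidence and 80% power (normal approximation).
from math import ceil

p1 = 0.02                     # hypothetical baseline conversion rate
p2 = p1 * 1.30                # the claimed 30% relative improvement
z_alpha, z_beta = 1.96, 0.84  # two-sided alpha = 0.05, power = 0.80

n_per_variant = ceil(
    (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
)
print(n_per_variant)  # roughly 10,000 visitors per variant -- not 18 clicks
```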
Describe Renee in 5 words or less.
Good listener, loyal, curious, tolerant
Thank you, Renee, for joining the conversation!