A Conversation with Journey Further’s Jonny Longden about Experimentation
I recently spoke to Jonny about how 9 out of every 10 changes made to a website are a waste of money, what a nightmare client looks like, and how he established the Experimentation program at Sky.
Rommil: Hi Jonny, how are you? Thank you for taking the time to chat today! How have you been considering all that is going on these days?
Jonny: I’m good, thanks very much. Holding up pretty well during lockdown. There’s a lot I really like about it, such as being around the family a lot more and being able to see the kids whenever. However, I absolutely hate the fact that we simply don’t have the time to do all the homeschooling stuff we’re supposed to, or to really pay them the attention they need.
I feel that, totally. I’ve been struggling with that myself. Beyond that, how have you been handling working remotely?
I used to really hate working from home, but I’ve gotten very used to it now, so that’s a good thing, as I’m managing to be very productive on the whole.
Great to hear! So, how about you tell us a bit about yourself and what you do.
I’ve been in digital analytics and conversion optimization/experimentation pretty much since it began, or at least since A/B testing began. I ran my first A/B test on a website in about 2007 or maybe 2008, just after Google Website Optimizer came out. Since then I’ve had a variety of roles, both agency- and client-side, all focused on the same thing.
My most relevant experience, though, is that I built and ran the experimentation function at Sky, which was, and has continued to be, an innovative and pioneering example of an in-house experimentation function.
I now run the experimentation and conversion division of a broader performance marketing agency called Journey Further. I started the business because I want to help other companies achieve what we did at Sky.
Outside of work, I’m a Dad to two awesome boys, 4 and 6. I love music of all kinds and German philosophy, and recently I’ve been quite into fitness.
You’ve stated that “9 out of every 10 changes you make to your website (without experimentation) is a waste of money or could damage your revenue.” Could you elaborate on this, and why do you think companies seem to be OK with it?
That comes from a piece of research conducted by Stefan Thomke (author of the excellent book, ‘Experimentation Works’) in conjunction with Optimizely. They gave him a huge load of data from experiments run across their platform, and he analyzed it to understand what the outcome of those tests was. Only 1 in 10 experiments run across their entire platform actually produced a winning result.
This is such an important piece of data. What this tells you is that, of all the tonnes of ideas which people had about stuff they should do to their websites, all of which would have seemed like a great idea at the time or even been based on hard research, 9 out of 10 of them didn’t work.
This means that, if you’re not testing, 90% of what you do to your website and the money you spend to do it, is a waste!
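The claim above can be made concrete with a quick back-of-the-envelope calculation. The figures below are purely illustrative assumptions (the interview only gives the 1-in-10 win rate), but they show how fast untested changes add up to wasted spend:

```python
# Back-of-the-envelope sketch of the "90% waste" argument.
# Only the win rate comes from the research cited above;
# the volume and cost figures are hypothetical.

win_rate = 0.10           # ~1 in 10 experiments produce a winning result
changes_per_year = 50     # hypothetical number of site changes shipped
cost_per_change = 10_000  # hypothetical build cost per change

total_spend = changes_per_year * cost_per_change
wasted_spend = total_spend * (1 - win_rate)  # spend on changes that don't work

print(f"Total annual spend: {total_spend:,}")
print(f"Likely wasted without testing: {wasted_spend:,.0f}")
```

Under these assumptions, 450,000 of a 500,000 budget goes to changes that would not have beaten the original, which is the force of the 9-out-of-10 point.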
I doubt most companies are OK with this, rather they just don’t know it. Most companies have a culture of fake success: they come up with ideas; push them to production; then try and measure the results. Because they have already invested in the work to do it, whoever was responsible is going to find a way for it to seem successful.
There are actually so many reasons why companies can’t see this though, which I could literally write a whole book on. I kind of see it as my mission to try and open people’s eyes to this.
Totally. I always tell people that despite our years of eating, we can’t predict what we will crave tomorrow. What hope do we have to predict what will work in a business context?
Changing gears a bit. How do you measure and communicate the success of your work to your clients?
The most important thing we try to get across, as early as possible, is that experimentation should not be valued on finding ‘winners’. That is a legacy idea from the old-fashioned notion of ‘CRO’, where you might hire a ‘hacker’ to try and trick your website into making more money.
Proper experimentation is about learning. When you see it like that, the only way a test can possibly ‘fail’ is if you fail to learn from it. For example, if you didn’t run it properly or broke the test, or if you failed to analyze the data and learn from it afterwards.
We have metrics which show the extent to which we have learned and fed that learning into bigger and ongoing tests, but we also make sure we measure both the positive revenue uplift of winning tests and the positive saving of not implementing changes which were not proven.
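The two-sided valuation described above (uplift from winners, plus spend saved on unproven changes) can be sketched in a few lines. The test names and figures below are entirely hypothetical, just to illustrate the accounting:

```python
# Minimal sketch of valuing an experimentation programme both ways:
# revenue uplift from winners shipped, plus build cost saved by NOT
# shipping changes the tests failed to prove. All figures hypothetical.

tests = [
    # (name, annualised_revenue_uplift, build_cost, won)
    ("new checkout CTA",    120_000,  8_000, True),
    ("homepage redesign",         0, 25_000, False),
    ("shorter signup form",  60_000,  5_000, True),
    ("social proof badges",       0, 12_000, False),
]

uplift_from_winners = sum(u for _, u, _, won in tests if won)
savings_from_losers = sum(c for _, _, c, won in tests if not won)

print(f"Revenue uplift from winners: {uplift_from_winners:,}")
print(f"Build cost saved by not shipping unproven changes: {savings_from_losers:,}")
```

The second number is the one non-experimenting teams never see: without a test, the homepage redesign and the badges would have been built and shipped anyway.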
How often are they surprised by the learnings, or do they even argue with you about them?
I myself am constantly surprised by a very large number of tests that we run. This is one of the reasons why I so strongly believe that opinion is almost worthless and is something that anyone who has ever done serious testing will also have learned. There is no such thing as a no-brainer.
Clients can be similarly surprised, however, I find they very rarely argue. I think this is something you find much more frequently when you are working client-side, as you tend to get far more immersed in the politics of stuff. It’s quite interesting to see how test results can create so much controversy, especially with creative people and designers.
What is the most common low-hanging fruit that companies overlook?
There isn’t one. There is no such thing as a best practice or low-hanging fruit. Even the most obvious seeming things can and will fail. It happens all the time. Do the research properly and figure out the best opportunities from that.
If there is one thing they should focus on for definite, it is building a solid experimentation capability that drives 100% of their development investment.
I have to ask. What’s your favourite Experimentation platform and why?
I honestly think it depends on the client, the situation etc — however I would probably have to say SiteSpect. I first bought and used SiteSpect whilst working as eCommerce Director for The Principal Hotel Company. It works very differently from other tools and is essentially server-side. You build a test by simply finding and replacing text in the source code being returned to the browser. I’m not that proficient a coder, but this instantly made more sense to me and gave me tonnes more flexibility. Plus, we had a lot of Ajax going on with that site, which made Google Optimize and the like useless.
Define a nightmare client — and what’s your approach to winning them over?
The worst kind of client is one who ultimately believes that their opinion and intuition are better than any kind of data, research or even testing. They’re rare but they do exist.
In a lot of cases you will never win over this person, but then you are probably not going to ever work with them anyway, as they wouldn’t hire anyone to do this stuff in the first place.
The ones who can be won over, as with anyone who doesn’t at first believe in it, are always swayed by seeing a very strong conviction disproven through a test. Everyone cares about money and their reputation, so why would they want to do something which is going to have the opposite effect to what they thought?
Now, in contrast, what is the perfect client?
The client who truly gets the learning aspect of experimentation. That is ultimately what it is all about.
With so many CRO agencies around — what sets Journey Further apart?
Whilst at Sky I had a tonne of CRO agencies come to present to me at one point or another. They were all impressive in their own ways but a couple of things struck me:
- They never seemed to have much strategic competence, by which I mean the ability to talk about digital strategy at a higher level. For me, that is where experimentation should both come from and aim towards. If you don’t understand the business (profitability and direction etc) then how can you optimize it? If you’re not aiming for experimentation to get to the point of influencing pivots in business model or value prop or whatever, then you have not grasped the learning potential of it.
- They all wanted to help us by going away and doing their thing ‘in a black box’. I honestly think experimentation can only truly work properly when it’s embedded in the client. Over the last 10 years, businesses have gone from seeing ‘digital’ as something to outsource to it being the beating heart of their business. They, therefore, want to control this stuff internally, even if they need help and outside resources to do it. This trend is only going to continue.
For these reasons, I set out with the intention that Journey Further is about helping clients to become experimental themselves, and to properly embed it into their business strategy.
We are therefore in some ways more of a consultancy than an agency, although we can and do provide the full agency model. We are all (at the moment) ex-client, from Sky, Asda, Tesco and Travelopia, so we understand how clients work better than most.
Can you tell us a bit about what inspired you to build out Experimentation over at Sky?
Before I was hired by Sky, they had separate digital teams dotted all over the business, in different silos. It was a mess and very inefficient, so they made the wise decision to create a big centre of excellence: a new office in Leeds, in the North of England. I was hired right at the beginning of that process and helped establish the whole thing.
My remit was actually quite ambiguous: my objective was simply to ensure that the investment in development was going to be commercially impactful. This was a golden opportunity: because we were literally starting from scratch with a new office, people, processes and everything, there was no legacy notion of what that should mean. I therefore just brought all of my background experience to bear in answering the problem and decided to create the most perfect experimentation function I could. I got halfway there!
It’s time for the Lightning Round!
Bayesian or Frequentist?
Where do you see Experimentation at Journey Further in 5 years?
As I mentioned, I think (hope) clients are going to continue to wake up to the need to take this stuff seriously, so I think our consultancy practice will continue to grow as we help bigger organizations with the management side of things.
Describe Jonny in under 5 words.
Just test the bloody thing
Jonny, thank you for joining the Conversation!
Thank you! It’s been a pleasure.