You hear it often: "follow best practices" – where a best practice is something generally accepted as the standard way of doing things, intended to help maintain a certain level of quality. Interestingly, you'll also often hear CROs arguing about them. But why would CROs, who are dedicated to improving the quality of customer experiences, argue about something intended to maintain quality? Let's look at this in a bit more detail.
The word “best” is thrown around pretty liberally. People claim that their solution is the “best”, that Michael Jordan was the “best”, and that Tina Turner knew who was “simply the best”. But what is the “best”? Best at what? Best for whom? With almost 8 billion people in the world, it is highly unlikely that what is “best” for one group of people will be what another group of people would consider the “best”. As a planet, we can’t even unanimously agree on whether Coke or Pepsi is the “best” (it’s Coke) or who are the “best” political candidates. That’s because “best” is relative – we all have different life circumstances, different values, and different goals in life.
With that said, there is value in learning and trusting in the results of others’ work. I trust that my seatbelt will work so I don’t have to do my own research when I drive a car. I trust certain movie critics’ opinions so I don’t have to see every movie myself to form an opinion. However, there are limits.
For instance, I wouldn’t trust a movie critic’s opinion on whether a seatbelt was tested adequately. Or a mechanical engineer’s opinion on whether I should see the new Marvel movie (or maybe I should, who knows?). Why is that? Because their expertise doesn’t transfer to those new contexts. They’re still smart people, but their opinions outside their domain should be taken with a grain of salt. Your mileage will vary.
Interestingly, the US Department of Health and Human Services has different kinds of Best Practices: research-validated best practices, field-tested best practices, and promising practices.
As the names imply, research-validated practices are backed by very comprehensive research and have “conclusive data from comparison to objective benchmarks with positive results” as well as “conclusive data from a comprehensive and objective evaluation by an external, qualified source (most often an academic institution or individual with the appropriate academic credentials).” In other words, a whitepaper from the internet probably wouldn’t qualify. I’d argue that there are virtually no CRO best practices that fall into this category.
As for the remaining categories: field-tested best practices are those that have been tried and seem to work, though the accounts of their efficacy can be up for debate. This is where I’d say most of the CRO industry’s advice falls. Finally, a promising practice is one that works for one organization and shows promise of working in others. I’d say this is where most CRO case studies fall.
At the end of the day, best practices, regardless of their nature, are useful as a starting point for exploration – not a substitute for real-world experimentation. Unless a best practice clears the very strict threshold of having conclusive data evaluated by someone with appropriate academic credentials, it’s in your best interest to just run the test, because you might learn something unexpected by doing so. After all, that’s why we experiment – to learn.
Good luck and see you in 2 weeks!
Founder, Experiment Nation