The Experiment Nation / CRO Tool 2022 Documentation Report

Executive Summary

Experiment Nation partnered with CRO Tool and asked the CRO/Experimentation community how they documented their Experiments.

The majority of Experimenters use cloud-based storage to house their documentation, and very few use Experimentation-specific tools. Most (>80%) are storing:

  • Title
  • Hypothesis
  • Results
  • Insights
  • Screenshots
  • Primary and secondary metrics

With that said, most are dissatisfied with the experience of finding older tests (>1 year old); the few who were satisfied were using their tool’s search functionality.

About the respondents

Earlier this year, the team posted a survey on ExperimentNation.com (including its social profiles and newsletter) and received 48 responses. Respondents self-reported their occupation, broken down below:

We then grouped the respondents into two categories, Company and Agency, as follows:

Where are Experimenters storing their learnings?

We asked where Experimenters store their Experiment documentation. This is how they responded:

We then grouped the responses by category as follows:

  • General cloud storage
    E.g., Shared Drive, Google Drive, Local
  • Project management tools
    E.g., Confluence, Trello, Asana, Shortcut, Monday, Pipefy
  • General purpose databases
    E.g., Airtable, Notion
  • Purpose-built Tools
    E.g., ClickUp, Effective Experiments, Iridion

The Company category stored their documents as follows:

The Agency category stored their documents as follows:

Takeaway: Approximately half of the respondents, regardless of category, use a generic cloud storage solution like Google Drive to store Experiment documentation, while only 12-19% use purpose-built tools like ClickUp.

What information are Experimenters documenting?

The most common information recorded (by more than 80% of respondents) is:

  • Title
  • Hypothesis
  • Results
  • Insights
  • Screenshots
  • Primary and secondary metrics

Takeaway: Most respondents don’t document the technical setup details of their tests. Furthermore, a sizeable chunk (25%) are either not documenting their target audiences/pages or not targeting specific audiences/pages.

Unsurprisingly, both Companies and Agencies follow similar breakdowns, with Agencies either targeting their tests toward specific audiences more often or doing a better job of documenting them.

Companies:

Agencies:

How are Experimenters finding results that are over a year old?

This was an open-ended question, but we grouped the results into general approaches. Furthermore, we’ve indicated whether there was a general negative sentiment towards each approach.

Approach                  % of respondents    Sentiment
Folders (cloud)           26%                 ?
Search in tool            24%                 ?
Archive list (by year)    21%                 ?
Filtering                 16%                 ?
Queries                   13%                 ?
Airtable archive          11%                 ?
Roadmap                   5%                  ?
Tags                      5%                  ?
Slides                    3%                  ?
Dashboard                 3%                  ?
Database                  3%                  ?

Takeaway: The majority of Experimenters are not pleased with how they have to retrieve old documentation; Search appears to be the most common satisfactory method.

Some interesting comments

  1. “We have a list of all conducted winner/losers which can be filtered by a bunch of parameter like winner/loser, page, uplift, testcategory, owner etc.”
  2. “Roadmap/Trello : store information linked to the brief for further iteration-duplication / Slides : store information linked to the story (insights/experiment/results/decision) / Github : store the code used”
  3. “Our AB tests are developed by an [agency]. When we will do it by ourselves we will save also the code”
  4. “Would love to have templates option for white labeling”
  5. “It’s a hot mess everywhere I’ve worked. Especially in combination with the usability tests and qualitative consumer studies.”
  6. “We’ve tried a bunch of different approaches with none of them working well for the different audiences. We’ve hacked together a solution that is workable for now but very limiting when people outside of the experimentation team need or want to find something.”
  7. “Our process isn’t perfect and relies on a mixture of intrinsic knowledge and JIRA know-how”
  8. “Collecting the meta data of our entire program, and over all our clients help identify internal opps to improve our process.”