The Experiment Nation / CRO Tool 2022 Documentation Report

Executive Summary

Experiment Nation partnered with CRO Tool and asked the CRO/Experimentation community how they documented their Experiments.

In general, the majority of Experimenters use cloud-based storage to house their documentation, with very few using Experimentation-specific tools. Most (>80%) are storing:

  • Title
  • Hypothesis
  • Results
  • Insights
  • Screenshots
  • Primary and secondary metrics

With that said, most are dissatisfied with the experience of finding older tests (>1 year old), while the few who were satisfied were using their tool's search functionality.

About the respondents

Earlier this year, the team posted a survey on ExperimentNation.com (including its social profiles and newsletter) and received 48 responses. Respondents self-reported their occupation, broken down below:

We then grouped the respondents into two categories, Company and Agency, as follows:

Where are Experimenters storing their learnings?

We asked where Experimenters store their Experiment documentation. This is how they responded:

We then grouped the responses by category as follows:

  • General cloud storage
    E.g., Shared Drive, Google Drive, Local
  • Project management tools
    E.g., Confluence, Trello, Asana, Shortcut, Monday, Pipefy
  • General purpose databases
    E.g., Airtable, Notion
  • Purpose-built Tools
    E.g., ClickUp, Effective Experiments, Iridion

The Company category stored their documents as follows:

The Agency category stored their documents as follows:

Takeaway: The majority of respondents, regardless of category, use a generic cloud storage solution like Google Drive to store Experiment documentation, while only 8% use purpose-built tools like ClickUp.

What information are Experimenters documenting?

The most commonly recorded information (documented by more than 80% of respondents) is:

  • Title
  • Hypothesis
  • Results
  • Insights
  • Screenshots
  • Primary and secondary metrics

Takeaway: Most respondents don't document the technical setup details of their tests. Furthermore, a sizeable chunk (25%) are either not documenting their target audiences/pages or are not targeting specific audiences/pages.
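
For illustration, here is a minimal sketch of what a single experiment record covering these fields might look like. This is a hypothetical Python structure, not a template from the survey; the optional fields at the bottom reflect the setup and targeting details that often go undocumented.

  from dataclasses import dataclass, field
  from typing import Optional

  @dataclass
  class ExperimentRecord:
      """Illustrative record built around the fields most respondents document."""
      title: str
      hypothesis: str
      results: str                                  # e.g. observed lift, significance
      insights: str                                 # what the team learned
      primary_metric: str
      secondary_metrics: list[str] = field(default_factory=list)
      screenshots: list[str] = field(default_factory=list)   # file paths or URLs
      # Details that, per the survey, often go undocumented:
      technical_setup: Optional[str] = None         # tool config, QA notes, code references
      target_audience: Optional[str] = None         # audience/page targeting, if any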

Unsurprisingly, both Companies and Agencies follow similar breakdowns, with Agencies either targeting their tests towards specific audiences more often or doing a better job of documenting that targeting.

Companies:

Agencies:

How are Experimenters finding old results (more than a year old)?

This was an open-ended question, but we grouped the results into general approaches. Furthermore, we've indicated the general sentiment (positive or negative) towards each approach.

Approach                  % of respondents   Sentiment
Folders (cloud)           26%                👎
Search in tool            24%                👍
Archive list (by year)    21%                👎
Filtering                 16%                👍
Queries                   13%                👍
Airtable archive          11%                👎
Roadmap                    5%                👎
Tags                       5%                👍
Slides                     3%                👎
Dashboard                  3%                👎
Database                   3%                👎

Takeaway: The majority of Experimenters are not pleased with how they have to retrieve old documentation; searching within a tool seems to be the most common method respondents were satisfied with.
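
As a rough illustration of the "filtering" and "tags" approaches, the sketch below assumes experiment records are kept in a flat list with a run date and free-form tags (both hypothetical fields, not something the survey asked about) and filters them by year and tag.

  from datetime import date

  def find_experiments(archive, year=None, tag=None):
      """Return records matching an optional year and/or tag.

      `archive` is assumed to be a list of dicts with "run_date" (datetime.date)
      and "tags" (list of str) keys -- a hypothetical layout for illustration.
      """
      matches = []
      for record in archive:
          if year is not None and record["run_date"].year != year:
              continue
          if tag is not None and tag not in record["tags"]:
              continue
          matches.append(record)
      return matches

  archive = [
      {"title": "New CTA copy", "run_date": date(2021, 6, 3), "tags": ["checkout"]},
      {"title": "Pricing page layout", "run_date": date(2022, 1, 18), "tags": ["pricing"]},
  ]

  # Pull everything tagged "checkout" that ran in 2021.
  old_checkout_tests = find_experiments(archive, year=2021, tag="checkout")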

Some interesting comments

  1. "We have a list of all conducted winner/losers which can be filtered by a bunch of parameter like winner/loser, page, uplift, testcategory, owner etc."
  2. "Roadmap/Trello : store information linked to the brief for further iteration-duplication / Slides : store information linked to the story (insights/experiment/results/decision) / Github : store the code used"
  3. "Our AB tests are developed by an [agency]. When we will do it by ourselves we will save also the code"
  4. "Would love to have templates option for white labeling"
  5. "It's a hot mess everywhere I've worked. Especially in combination with the usability tests and qualitative consumer studies."
  6. "We've tried a bunch of different approaches with none of them working well for the different audiences. We've hacked together a solution that is workable for now but very limiting when people outside of the experimentation team need or want to find something."
  7. "Our process isn't perfect and relies on a mixture of intrinsic knowledge and JIRA know-how"
  8. "Collecting the meta data of our entire program, and over all our clients help identify internal opps to improve our process."