Executive Summary
Experiment Nation partnered with CRO Tool and asked the CRO/Experimentation community how they documented their Experiments.
The majority of Experimenters use cloud-based storage to house their documentation, with very few using Experimentation-specific tools. Most (>80%) are storing:
- Title
- Hypothesis
- Results
- Insights
- Screenshots
- Primary and secondary metrics
With that said, most are dissatisfied with the experience of finding older tests (>1 year old); the few who were satisfied were using their tool’s search functionality.
About the respondents
Earlier this year, the team posted a survey on ExperimentNation.com (including its social profiles and newsletter) and received 48 responses. Respondents self-reported their occupations, broken down below:
We then grouped the respondents into two categories, Company and Agency, as follows:
Where are Experimenters storing their learnings?
We asked where Experimenters store their Experiment documentation. This is how they responded:
We then grouped the responses by category as follows:
- General cloud storage (e.g., Shared Drive, Google Drive, Local)
- Project management tools (e.g., Confluence, Trello, Asana, Shortcut, Monday, Pipefy)
- General purpose databases (e.g., Airtable, Notion)
- Purpose-built tools (e.g., ClickUp, Effective Experiments, Iridion)

The Company category stored their documents as follows:

The Agency category stored their documents as follows:

Takeaway: Approximately half of respondents, regardless of category, use a generic cloud storage solution like Google Drive to store Experiment documentation, while only 12-19% use purpose-built tools like ClickUp.
What information are Experimenters documenting?
The most commonly recorded information (documented by over 80% of respondents) is:
- Title
- Hypothesis
- Results
- Insights
- Screenshots
- Primary and secondary metrics
Takeaway: Most respondents don’t document the technical setup details of their tests. Furthermore, a sizeable portion (25%) are either not documenting their target audiences/pages or are not targeting specific audiences/pages.
Unsurprisingly, both Companies and Agencies follow similar breakdowns, with Agencies either targeting their tests towards specific audiences more often or doing a better job of documenting them.
Companies:
Agencies:
How are Experimenters finding results that are over a year old?
This was an open-ended question, so we grouped the responses into general approaches. We’ve also indicated whether there was a general negative sentiment towards each approach.
| Approach | % of respondents | Sentiment |
| --- | --- | --- |
| Folders (cloud) | 26% | ? |
| Search in tool | 24% | ? |
| Archive list (by year) | 21% | ? |
| Filtering | 16% | ? |
| Queries | 13% | ? |
| Airtable archive | 11% | ? |
| Roadmap | 5% | ? |
| Tags | 5% | ? |
| Slides | 3% | ? |
| Dashboard | 3% | ? |
| Database | 3% | ? |
Takeaway: The majority of Experimenters are not pleased with how they have to retrieve old documentation, and searching within their tool seems to be the most common satisfactory method.
Some interesting comments
- “We have a list of all conducted winner/losers which can be filtered by a bunch of parameter like winner/loser, page, uplift, testcategory, owner etc.”
- “Roadmap/Trello : store information linked to the brief for further iteration-duplication / Slides : store information linked to the story (insights/experiment/results/decision) / Github : store the code used”
- “Our AB tests are developed by an [agency]. When we will do it by ourselves we will save also the code”
- “Would love to have templates option for white labeling”
- “It’s a hot mess everywhere I’ve worked. Especially in combination with the usability tests and qualitative consumer studies.”
- “We’ve tried a bunch of different approaches with none of them working well for the different audiences. We’ve hacked together a solution that is workable for now but very limiting when people outside of the experimentation team need or want to find something.”
- “Our process isn’t perfect and relies on a mixture of intrinsic knowledge and JIRA know-how”
- “Collecting the meta data of our entire program, and over all our clients help identify internal opps to improve our process.”