Over a decade ago (can you imagine?!), I completed my master’s in Human-Computer Interaction and media studies. I learned a lot about how the human brain processes information, and one insight has stuck with me ever since: the brain’s capacity to process information is biologically limited.1
We’ve made incredible technological progress and exponentially increased our access to information. But our mental bandwidth hasn’t evolved much in roughly 300,000 years, which is when Homo sapiens (us!) emerged with the brain processing capacity we still have today.
Our working memory…
… the system responsible for holding and processing information in real time, has a well-documented capacity. Psychologist George A. Miller’s famous paper, “The Magical Number Seven, Plus or Minus Two” (1956), showed that most people can hold about five to nine items in working memory at once. That hasn’t changed despite all our technological advances. So while the brain’s neuroplasticity lets us adapt to new tools and environments, our core cognitive architecture, our “mental bandwidth”, has remained largely the same since our species arose.
This limitation has serious implications that we should keep reminding ourselves of: just think of the myth of multitasking, or the effects of social media on our attention span. In today’s era of “information explosion”, data is everywhere, yet our attention and cognitive resources are finite. Making informed decisions requires more than intuition. I like to call this intuition “internal wisdom”: knowledge derived from within an organisation. To truly innovate and succeed, we need to balance it with “external wisdom”: the perspectives, experiences, and feedback of the actual users of our digital products and services.
But this raises a challenge: how do we manage the massive amount of data, opinions, and insights under constant time pressure? Let’s face it, no one (especially not low-maturity stakeholders) waits for a research study to be perfectly finished before making decisions. How can we use tools like UX Research Repositories to turn this challenge into an advantage?
ROI of Research Repositories
A UX Research Repository is more than just a tool for storing data. It’s a system for collecting, analysing, and sharing insights in a standardised way. When done right, it becomes a hub for both internal and external wisdom—making information accessible and actionable across teams, while saving money. But how?
Recycle Instead of Re-do
One exciting advantage we’ve recently discovered is cost savings through “recycling” insights. We use AI features in the repository to recognise patterns across projects, which eliminates the need to search manually for specific tags or fields. We call this “fast discovery”. For example, we’ve conducted multiple studies on how customers manage grocery lists, in different digital products and at different times. By asking the repository, “What do we have about lists?”, we now get a concise summary spanning multiple projects. Since this AI-generated summary references the raw data, we can fact-check it to ensure its validity, which helps to mitigate risks like AI hallucinations.
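To make “fast discovery” a bit more tangible, here is a minimal sketch of the underlying idea: gather every insight that touches a topic across projects and keep a pointer to the raw data so each finding can be fact-checked. The names (`Insight`, `fast_discovery`) are hypothetical; in our case the actual lookup and summarisation is done by the repository vendor’s AI features, not by code we wrote.

```python
from dataclasses import dataclass


@dataclass
class Insight:
    project: str      # which study or product the insight came from
    summary: str      # the insight itself, in one or two sentences
    tags: list[str]   # workspace-wide tags, e.g. "lists", "positive"
    source: str       # link to the raw data (interview clip, CSAT response, ...)


def fast_discovery(repository: list[Insight], topic: str) -> dict[str, list[Insight]]:
    """Collect all insights touching a topic, grouped by project,
    so every finding stays traceable to its raw data."""
    matches: dict[str, list[Insight]] = {}
    for insight in repository:
        if topic in insight.tags or topic in insight.summary.lower():
            matches.setdefault(insight.project, []).append(insight)
    return matches


# Example: "What do we have about lists?"
# results = fast_discovery(all_insights, "lists")
# for project, insights in results.items():
#     print(project, [i.source for i in insights])
```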
Lessons I’ve Learned in Research Ops
Building and maintaining a research repository isn’t always a breeze, and I’ve had my share of bumps along the way. Here are some of the key things I’ve learned:
- Balancing Complexity and Usability:
A repository has to be easy to use, not just for UX researchers but also for stakeholders who might not have a research background. If a tool is too complicated, people simply won’t use it (or they’ll complain, a lot!).
- Upskilling Through Engagement with the Repository
One of the most profound yet often overlooked effects of implementing a UX research repository is how it upskills those who engage with it. UX designers and product managers who actively contribute, whether by analysing user interviews or synthesising CSAT reports, develop a deeper understanding of qualitative content analysis. Even more impactful is how working with raw data fosters immediate and genuine empathy for our users. By working directly with user data, these colleagues evolve from creators of prototypes into advanced contributors to UX research. This ripple effect is transformative, and it underscores the repository’s value far beyond data storage.
- Maintaining Order and How a Dash of OCD Helps 😉
Keeping a repository “tidy” requires ongoing effort. In our case, different roles contributed to the repository: UX Designers, UX Researchers, and Product Managers. This collaboration led to diverse tagging needs. Some teams created product-specific tagging boards, while we maintained a workspace-wide tagging structure for broader searches, like filtering for “positive” or “negative” emotions. To prevent the system from becoming overly complex, we regularly aligned on which tags to include in the overarching structure. Ironically, a light-hearted sense of OCD helps: ensuring tagged data aligns with a unified system requires a knack for order 😅.
- Providing Training and Support
Proper training is critical. Tools like research repositories come with a learning curve, and not everyone contributing shares the same enthusiasm for digging into interviews. A visible support person or team guiding contributors makes all the difference. So please keep this in mind when you introduce such a tool to a wider audience.
- Offering Useful Templates
Templates help ease contributions and standardise reporting. For example, we implemented a three-layer structure for insights:
- Granular Level: Detailed insights, like CSAT responses shared within a team.
- Project Level: Concise reports summarising one research project, useful for stakeholders.
- Quarterly Level: Summaries of multiple projects for broader updates.
- The Value of Metadata
Metadata (or “fields” in some repositories) adds crucial context. Marking insights with target groups, like “HoReCa” for Hotel, Restaurant, Catering users in B2B contexts, lets you filter insights. It also helps in prioritising effectively via metrics like “business relevance” or “severity”. Agreeing on standard metadata fields with your team is key for consistency (a sketch of what such an entry could look like follows after this list). Here, you need to make a healthy decision about how far you want to drive the fielding: do you add fields to the raw data, or only to the insights? It’s your choice.
- Communicating Insights Beyond the Repository
Let’s face it: Insights don’t promote themselves. Post links on platforms like Yammer or Teams where your stakeholders communicate, e.g. at the end of a QBR. Mark “golden nuggets” within the repository for visibility. These actions ensure findings reach the right people. Some repositories even allow marked insights to appear on their homepage or in feeds for easier discovery.
- Focusing on the “Why” or “So what?”
It’s not enough to have data; we need to clearly communicate why certain insights matter. Framing insights around their impact makes them more persuasive and actionable (see my article on the ROI of User Research).
- Outsourcing vs. Building an In-House Repository
While buying an external repository solution may feel risky due to vendor lock-in, building a custom repository is often impractical: it requires significant resources and time, and development, maintenance, and enhancements are expensive. Training an AI model on your own data is feasible only for high-maturity organisations that are heavily invested in UX.
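To illustrate the three reporting layers and the metadata fields mentioned above, here is a minimal sketch of what a single insight entry could look like. The field names (level, target_group, business_relevance, severity) are hypothetical examples for illustration, not the schema of any particular repository product; agree on your own fields with your team.

```python
from dataclasses import dataclass, field
from enum import Enum


class Level(Enum):
    GRANULAR = "granular"    # detailed insights, e.g. single CSAT responses shared within a team
    PROJECT = "project"      # concise report summarising one research project
    QUARTERLY = "quarterly"  # summary of multiple projects for broader updates


@dataclass
class InsightEntry:
    title: str
    level: Level
    target_group: str        # e.g. "HoReCa" for B2B hotel/restaurant/catering users
    business_relevance: int  # e.g. 1 (low) to 5 (high), agreed with the team
    severity: int            # e.g. 1 (minor) to 5 (critical)
    tags: list[str] = field(default_factory=list)     # workspace-wide tags like "negative"
    sources: list[str] = field(default_factory=list)  # links to the underlying raw data


# Example of a project-level entry:
# entry = InsightEntry(
#     title="Customers struggle to reorder recurring grocery lists",
#     level=Level.PROJECT,
#     target_group="HoReCa",
#     business_relevance=4,
#     severity=3,
#     tags=["lists", "negative"],
#     sources=["interview-2024-03-12#14:32"],
# )
```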
What’s Next for Democratising Research?
The goal of democratising research isn’t just to make insights available to everyone but to create a culture where decisions are informed by a mix of internal and external wisdom. Research insights shouldn’t be a “nice to have”; they should be essential drivers of business success.
Fostering Skill Development and Empathy at Scale
As organisations democratise research, we must not overlook the profound impact it creates. Providing a research repository doesn’t just make insights accessible; it empowers contributors to deepen their expertise in UX research. Imagine a future where product managers, designers, or even business stakeholders routinely conduct qualitative analyses because the repository makes it so easy, and empathise deeply with users because they’ve worked directly with raw data. This not only elevates the quality of research but also embeds user-centric thinking across the organisation.
Bridging the Gap Between Data and Action
By streamlining how we collect and share research insights, we can bridge the gap between data overflow and human limitations. Tools like research repositories are just the beginning. However, I’m not blind to their evolving nature. Maintaining such a system requires ongoing effort and adaptability. But the rewards—better decisions, cost savings, and deeper understanding—are worth the effort.
Disclaimer
The text in this article was improved with the help of WordPress AI and ChatGPT.
Footnotes
- Cowan, N. (2001). “The magical number 4 in short-term memory: A reconsideration of mental storage capacity.” Behavioral and Brain Sciences, 24(1), 87–114.
Cowan revisits Miller’s findings and proposes that the effective capacity of working memory is closer to four chunks of information, providing an updated understanding of cognitive limits.
Broadbent, D. E. (1958). Perception and Communication.
Broadbent introduced the concept of a “bottleneck” in attention, arguing that we can only process a limited amount of information at any given time.
Sweller, J. (1988). “Cognitive load during problem solving: Effects on learning.” Cognitive Science, 12(2), 257–285.
This paper explores how cognitive load theory links working memory limitations to learning, providing insight into how our brains handle complex tasks.
Just, M. A., & Carpenter, P. A. (1992). “A capacity theory of comprehension: Individual differences in working memory.” Psychological Review, 99(1), 122–149.
This theory ties working memory limitations to individual differences in the neural resources available for cognitive tasks.
Baddeley, A. D. (1992). “Working memory.” Science, 255(5044), 556–559.
Alan Baddeley’s model of working memory elaborates on the brain’s processing limitations, identifying components like the phonological loop and the visuospatial sketchpad.
Kahneman, D. (1973). Attention and Effort.
Kahneman discusses how attention is a limited resource, which is central to understanding decision-making and the brain’s capacity to allocate resources efficiently.
Simon, H. A. (1955). “A behavioral model of rational choice.” The Quarterly Journal of Economics, 69(1), 99–118.
Herbert Simon introduced the idea of “bounded rationality,” where decision-making is constrained by cognitive limitations.
Carrasco, M. (2011). “Visual attention: The past 25 years.” Vision Research, 51(13), 1484–1525.
This paper ties cognitive limitations to the biological constraints of sensory and attentional systems.
Koch, C., & Ullman, S. (1985). “Shifts in selective visual attention: Towards the underlying neural circuitry.” Human Neurobiology, 4(4), 219–227.
It explains the brain’s prioritization of information processing, demonstrating how attentional shifts are managed by neural circuits.
