In recent posts in this section, we have attempted to explain what research impact is, how we can connect science to society more effectively, and what we value when we talk about impact. This is all part of a scientific system seeking new ways to recognise and evaluate research more holistically, beyond traditional scientific production.
At CREAF, we often ask ourselves: how can we know if what we do has an impact? What is its relevance? As with many things in research, the answer is 'it depends'. Impact is rarely the result of a single project or team. Rather, it is often the result of a network of contributions and interactions involving researchers, technicians, managers, communities, administrations, NGOs and many other individuals who invest their time, knowledge, energy and passion (as discussed in this article).
It is essential to value everything we invest our time and energy in, both within and beyond the academic sphere. This involves recognising that activities such as setting up a working group to improve the management of a nature park or collaborating with local stakeholders on initiatives are also part of the value that researchers bring. These actions complement scientific research and transform knowledge into real-world impact.
However, these processes often include elements that are beyond our control, and it is impossible to predict whether they will generate tangible or immediate results. Despite this, learning from our experiences, understanding what has happened and adjusting our approach enables us to discover new paths to knowledge and impact, both planned and unexpected.
That is why, today, we want to discuss how and why to document these processes, and which mechanisms can help us monitor them. We also want to reflect on the diversity of quantitative and qualitative evidence that can demonstrate research impact beyond academia.
Contribution vs attribution: impact as a shared effort
When change occurs - a new environmental policy, a more sustainable management practice or a better understanding of biodiversity, for example - it is tempting to want to know "who made it possible." However, in reality, impacts are collective and the result of teamwork. What we can do is recognise our contribution to a broader process, as one piece of the puzzle that helps make change possible. In a multi-stakeholder environment, it is important to be honest about what we have really contributed.
This encourages us to abandon a linear perspective and understand impact as a shared journey, where various initiatives influence, interact with and inspire each other. At CREAF, we embrace this perspective, showcasing our work within the academic field as well as our efforts beyond the boundaries of research.
Therefore, we are talking about contribution rather than attribution: identifying how our research has been a key element of broader change. This does not reduce the merit or success of the project or research group but rather places its impact in a real collective and shared context. At the same time, it gives credibility to the work done by highlighting the part of the process that we have helped to make possible.
Measuring what is collective: combining quantitative and qualitative evidence
If there are multiple paths, there must also be multiple tools to track them. Therefore, assessing the impact and all the activities and interactions required to achieve it (the impact pathway) involves combining quantitative and qualitative evidence.
Inspired by Matter of Focus's Contribution Framework, we can map contributions on the path to impact: what activities we have carried out, what results they have generated, who has used them, and what kind of change—big or small—has occurred as a result. Rather than looking for a direct cause-and-effect relationship, we should focus on understanding how we have contributed to making that change possible.
Quantitative evidence helps us to measure the contribution of research in a more interpretable way, but it often requires context. Rather than describing all the benefits generated, it quantifies specific aspects of the path towards impact. For example, we can collect the number of people who have attended forest management training courses and complement this with the number of hectares of forest where the new techniques acquired have been implemented and with what results (and add qualitative evidence!).
Qualitative evidence, which is equally essential and often synergistic, can include testimonials from stakeholders, changes in perception, lessons learned, new management dynamics, or decisions observed through interviews and surveys. A well-documented report can sometimes explain impact better than a simple table of data; and while the two dimensions complement and enrich each other, qualitative evidence may at times be the only way to explain and demonstrate the benefits achieved. In the previous example, we could add interviews or surveys that reveal what benefits forest managers have detected, and how this has influenced their perception of research or their decision-making in forest management.
The value of monitoring what we do (even if we don't know what will work)
Monitoring is not just an administrative exercise. Collecting and storing information about what we do—collaborations, meetings, decisions, small or partial results—allows us to look back and understand what really happened, what we learned, and how we can improve.
In reality, there are many elements that cannot be controlled in the process of generating impact: we do not always know what will work, or which paths will lead to a specific result. However, this constant recording - this attention to the process - becomes a source of knowledge that allows us to reflect on activities, assess whether what we had planned works, and discover new paths, often unexpected by us.
This monitoring and reflection is not only a way of being accountable, but also of learning and adding value to the collective experience. It should be remembered that impact monitoring extends throughout the entire research project cycle (and beyond): it begins with the preparation of proposals, continues during execution - with the monitoring and evaluation of progress - and extends to the final justification, where it is necessary to demonstrate how the objectives have been achieved and which results have translated into concrete benefits within the framework of the project.
CREAF visit to the forest management group in Montesquiu. Image: Galdric Mossoll
Value what we do together
Ultimately, talking about contribution and impact measurement means valuing the time and energy we invest in what we do, both inside and outside academia. Every meeting with local managers, every activity with a school, every piece of shared data, and every inspiring conversation is part of a collective journey towards impact.
The more information, reflection, and learning we accumulate, the more value we give to what we do. Perhaps we will not always know exactly "what has worked," but we will know that this whole process has helped us grow, improve, and open new paths—some planned and others we have not yet imagined.
For this reason, we propose rethinking how we measure impact, giving more emphasis to the process and actions taken to achieve it. We believe that evaluation should not be just a final exercise, but an opportunity for learning and continuous improvement. Understanding the monitoring and collection of evidence as a living process allows us to move towards a more open, collaborative, and socially relevant science.
This approach also prepares us better for future proposals, projects or evaluations, and turns monitoring into a tool for improving what we do and bringing us closer to the ultimate goal: contributing to the advancement of knowledge and the well-being of society and the environment.
To know more
- Article: Progressing research impact assessment: A 'contributions' approach, Sarah Morton
- Evaluating Societal Impact (2024). Impact indicators - overview and selection menu, Erasmus University Rotterdam (EUR)
- Collecting Research Impact Evidence: Best Practice Guidance for the Research Community, Vertigo Ventures and Digital Science, June 2016