"I had never worked in a session like this before, and it was very enriching. I was surprised by the variety of people who came, and by seeing how each group, one after the other, built a shared consensus, because there was a lot of consensus!"
On October 3, CREAF held its first participatory session to discuss how to apply the CoARA (Coalition for Advancing Research Assessment) principles to the evaluation of the center's research. The objective was to align institutional policies for the selection, promotion and evaluation of research staff with the CoARA principles. The session brought together around thirty people from different professional categories, including researchers, technicians and managers, who worked in three groups led by members of the CREAF CoARA working group. The group was formed with the mission of adapting these principles to the center's own context, and is made up of the head of Academic Talent and Gender, Teresa Rosas; the head of Research Impact, Anabel Sánchez; the head of Open Science, Florencia Florido; and three researchers, Maurizio Mencuccini, Estela Romero and Jordi Bosch.
During the session, the three groups discussed three specific institutional policies in a World Café format, in which each group consecutively built on the previous group's contributions. The discussions exemplified the elements under review within the CoARA framework.
The session was highly valued by the participants, who highlighted the importance of sharing perspectives and collectively building this new way of understanding evaluation. Among the most frequently repeated ideas were the need to value the diversity of research outputs beyond published articles, the importance of transparency in evaluation processes, and the recognition that making these processes fairer takes time.
The session is just the first step in a participatory process that will continue over the coming months and forms part of the action plan launched in 2023 with CREAF's accession to CoARA. The intention is to keep opening spaces to collect proposals and build consensus. This path requires time, active listening and a critical look at traditional evaluation mechanisms, but it also represents an opportunity to build a fairer model aligned with the values of the center.