On February 27 and 28, 2023, the CGIAR’s Independent Advisory and Evaluation Service (IAES) held a workshop in Rome, Italy, on its new set of evaluation guidelines. These build on the Quality of Research for Development (QoR4D) Frame of Reference of the CGIAR Independent Science for Development Council (ISDC), and provide the framing, criteria, dimensions, and methods for assessing QoR4D – both within CGIAR and in other like-minded organizations. The hybrid online and in-person event was designed to help practitioners across and beyond the CGIAR system understand and apply the new guidelines in their own evaluative contexts.
We spoke to evaluation consultants Emma Rotondo and Rodrigo Paz Ybarnegaray to find out more about their experiences of the workshop and aspirations in this arena going forward. Rotondo is an international evaluation consultant based in Peru. She is the president of the Peruvian Evaluation Network, and the founder of the Latin American and Caribbean Network of Monitoring, Evaluation and Systematisation (ReLAC). Paz Ybarnegaray, meanwhile, is an international evaluation specialist based in Bolivia. He is currently Evidence and Impact Manager for the Cadasta Foundation, but previously worked as an independent evaluation consultant – including for several CGIAR centers.
Q: How familiar were you with the CGIAR, the ISDC and the Evaluation Function prior to the workshop?
Rotondo: I led the evaluation area of the Andean Change Platform project for the [CGIAR] International Potato Center for four years, from 2007 to 2011. There, we did a lot of impact evaluations across four Andean countries – Peru, Bolivia, Ecuador, and Colombia – and produced an inventory of participatory methods for innovation.
Paz Ybarnegaray: I’ve worked with several CGIAR centers – CIMMYT, CIAT, IFRI, and WorldFish – as an evaluation consultant, and a couple of times with WorldFish as an evaluation scientist; some of the projects I was part of at those centers were evaluated by the ISDC.
Q: Were your expectations of the workshop met? Did you have any unanticipated learnings?
Rotondo: For me, it was a very important meeting because I reconnected with the topic of research evaluation. I was very interested in learning about how evaluation can feed back into the planning and design of future interventions. I think that this evaluation guide is very powerful: I consider it very up-to-date and relevant.
Paz Ybarnegaray: What I learned is that the work itself is not new, in terms of trying to build frameworks, processes, guidelines, methods, and tools to help the CGIAR centers evaluate the quality of research. But what is new is that this is, as far as I know, the first time they are making a collective, systematic, and comprehensive effort to do it, with all the centers involved as part of One CGIAR. And it was good to hear that the team managed to take the process right through to the production of the guidelines.
The way they did it was also very interesting: it was participatory, consulting different people at different levels, involving evaluation professionals, and having well-known advisors provide feedback along the way. The result looks like a very good tool with a lot of potential within the CGIAR landscape.
What they produced is a very systematic, comprehensive, and interesting way of looking at how to evaluate research and the production of science – but at the same time, it's a very practical tool. Sometimes you can produce something very complex and difficult to put into practice, but I found these guidelines particularly practical, and that's key if they are to be fully implemented within the CGIAR system.
Q: Was there anything you would have liked to see more of?
Rotondo: There are three issues that I think are very important to develop further in the future. The first is tools for evaluation in fragile and changing contexts – how to improvise in these kinds of situations. The second is tools for context analysis: it's a very complex thing to do, and very important in an evaluation, and it needs to include social, community, and organizational elements. The third is to consider the ‘soft skills’ or interpersonal competencies of the evaluators.
Paz Ybarnegaray: The only thing – which happens all the time and is not the fault of the organization – was the lack of time. The conversations were excellent, and many times we had to end sessions with all of us still wanting to know and discuss more.
Q: What did you find the most exciting? (e.g. sessions, people, case studies)
Rotondo: The selection of people who were there. The participants came from different regions and had all kinds of different relationships to research and evaluation. It was a challenge to bridge these different communities of research and evaluation – it's not easy – but it was very interesting: this kind of dialogue opens up other perspectives, assumptions, interests, and approaches.
Paz Ybarnegaray: The people were very interesting: I got to be with old colleagues and friends, and I also made new connections with other people who are interested in this area of work. The sessions and exercises were also interesting and exciting – they were a good way to get into the details of the guidelines and how to approach an evaluation. In general, it is very exciting to see such a big effort come to fruition, and to know that it is going to be implemented and will hopefully produce useful information to help guide the centers in how they do research and shape processes at different levels of science production.
Q: Is there a particular way in which you would like to continue engaging with IAES/Evaluation and CGIAR on this topic, and around the Guidelines? If so, what?
Rotondo: Yes, in two ways. First, as a leader of a professional evaluation network and a member of a regional network of evaluators, I believe that the topic of research evaluation is not sufficiently positioned within the community of evaluators. So, it's very important to create spaces for debate, exchange of information, and elaboration of proposals.
The second is that I am interested and experienced in issues related to gender, social inclusion, poverty reduction, innovation, resilience, and especially programs in highly vulnerable situations and contexts. I believe that the guide gives importance to these issues – and that I can contribute more to these arenas, too.
Paz Ybarnegaray: This kind of evaluation fits in quite a narrow space – it pertains to universities, research centers around the world, and of course, the CGIAR’s own research. Right now, I'm working mostly in development and not so much in research, so there’s not much direct application of these kinds of tools. But if I’m part of an evaluation in the future that is looking specifically at quality of research, then I think these guidelines will be very useful.