Blog

A Q+A with Evaluation Experts Amy Jersild and John Gargani: Learning from the IDEAS 2025 Multi-Dimensional Evaluation Conference, Italy


From 4 to 6 March 2025, evaluation specialists and practitioners from across the globe gathered at the headquarters of the Food and Agriculture Organization of the United Nations (FAO) in Rome, Italy, for the 2025 joint evaluation conference of the International Development Evaluation Association (IDEAS) and the New Development Bank (NDB) Independent Evaluation Office (IEO).

Through 36 selected panels and roundtables, nine professional development workshops, and contributions from leading global experts, the program explored the topic of ‘Evaluation for transformational change’ across key themes, including: Sustainable Development Goals (SDG) evaluation and its role in policy transformation; climate and crisis evaluation for a more sustainable future; equity, gender, and human rights in evaluation; strengthening evaluation capacity worldwide; and the role of evaluation in emerging markets and the BRICS countries (Brazil, Russia, India, China, South Africa, Egypt, Ethiopia, Indonesia, Iran and the United Arab Emirates). 

In the wake of the conference, we spoke with Amy Jersild and John Gargani, evaluation experts and collaborators with CGIAR’s evaluation function, to hear their take on its most important takeaways and on future directions for the evaluation landscape in CGIAR and beyond.

Q: How have you engaged with Monitoring, Evaluation, Learning and Impact Assessment (MELIA) in CGIAR? 

Amy: I have been working with the evaluation function under CGIAR’s Independent Advisory and Evaluation Service (IAES) for over four years. I began by peer-reviewing a proposed method for synthesizing the evidence CGIAR produced over a ten-year period [Synthesis], and I was later invited to apply a newly developed framework for assessing the evaluability of programs [EA guidelines]. I am involved in EA work to this day, currently focused on assessing the evaluability of CGIAR’s new 2025-2030 portfolio.

John: I served for two years on the Evaluation Reference Group [ERG] to IAES, including during the workshop to launch CGIAR’s guidelines on evaluating Quality of Science, and I now consult on matters related to evaluation and scaling. Alongside colleagues, I am currently writing guidance on how to evaluate scaling efforts at CGIAR that is aligned with its evaluation framework and builds on my work on scaling impact.

Q: What have you learned over the course of your engagement?

John: CGIAR is a complex organization working to solve complex problems in complex settings. That makes evaluation at CGIAR challenging and important. Luckily, it is being done by good people. But they are few in number and the need for evaluation is great. I contribute what I can and hope that it aids their efforts.

Amy: CGIAR’s work on evaluability assessments is exciting and important because it aims to help CGIAR’s programs prepare for evaluation and, in so doing, to better enable evaluation to support their desired change. It has been an interesting learning process to reflect on our application of the EA framework with CGIAR’s regional integrated initiatives in 2023 [EA Synthesis], thereby contributing data toward an eventual meta-evaluation of the framework’s effectiveness within CGIAR’s context and its 2025-2030 portfolio.

Q: What are the key workshops/sessions that you led or attended at the IDEAS conference? And what were the main takeaways?

Amy: In my consultancy role with the evaluation function under IAES, I co-organized and represented CGIAR in a panel session with colleagues from two UN agencies, FAO and the ILO [International Labour Organization]. We discussed evaluability assessments as a means to promote transformative change at the program, system, and organization levels, and as a way for evaluation itself to be transformative. We presented specific examples of programming and evaluation from each of the agencies, as well as country-level evaluability assessments. We also discussed the systems level within the UN, with the evaluability assessment of the UN Sustainable Development Cooperation Framework (UNSDCF).

Our use of the terms “transformative change” and “transformation” reflected the conference theme; “transformation” has become a buzzword in the development sector. While several participants expressed appreciation for learning about EAs as a tool to effect change, one takeaway for me was that it may be more helpful to discuss change in transitional rather than transformative terms. The etymological origins of the two words provide an interesting contrast: transition as “going across” versus transformation as “change in shape”. What a transformation looks like may be difficult to discern; it may be easier to think about the journey toward a desired change in a system, however that system is defined, particularly as our own context is in a state of change as well.

The scaling workshop John gave was also a real highlight and offered plenty of food for thought for CGIAR’s work. We had a lively discussion about the scaling of CGIAR’s programming and ways to evaluate it, which will greatly support the design and implementation of the upcoming evaluability assessments of the 2025-2030 Portfolio [TORs].

John: I presented a full-day workshop entitled ‘Scaling Impact: New Ways to Plan, Manage, and Evaluate Scaling’. I have given this workshop many times around the world, and I find it is easier to help people think differently about scaling than it is for them to believe they have permission to think differently. I also gave a keynote for the launch of EvalForEarth [a new evaluators’ community of practice], which is supported by CGIAR. My takeaways: cooperation among international agencies is alive and well. And don’t give the last talk before 200 participants are allowed to eat!

I also attended the session on evaluability assessment for transformation organized by the evaluation function of IAES with partners. It seems there are many perspectives on what evaluability assessment and transformation mean. That’s to be expected: if you bring a group of evaluators into a room and say, “We want to talk about X”, everyone will have a different perspective on X. That tells you something about how evaluators think. Transformation, to be fair, is a very challenging concept. How can we know that a system is transforming, or has transformed? Is it what people really need in a given context? Transforming a system often seems like the right thing to do, but ten organizations trying to transform the same system at the same time is probably a bad idea. So transformation takes a great deal of care, coordination, and inclusion to get right.

Q: What does CGIAR offer that stands out in such a setting?

John: CGIAR is able to present fully contextualized examples of its evaluation work, which tends to focus on how tidy scientific innovations are translated into the messy world of practice. This gets at the ‘last mile’ of knowledge translation, something that I believe has great power to improve the world yet receives too little attention elsewhere. Among those I know at CGIAR, there is an emphasis on, and a dedication to, realizing the benefits of research: not just immediate effects like “We grew more corn”, but how those effects ripple through economic and social systems, how they may produce both desirable and undesirable outcomes, and how larger issues, like gender equity, are at stake. See the related blog that Svetlana Negroustoueva and I co-wrote for AEA, highlighting CGIAR’s approach to evaluating Quality of Science [portal].

Amy: Specific to the IDEAS conference, I think CGIAR’s presentation alongside colleagues from FAO and the ILO, on evaluability assessments as a concrete tool to support change, was a highlight: “Here is an evaluative approach, here’s how it works, and here are concrete examples, at varying levels and in varying contexts, of what it can do.” As a flexible assessment approach it offers a wide range of possibilities, despite the challenge, as John noted, of agreeing on a common definition! Similarly, John’s work provided a very specific framework for thinking through both how to scale and how to evaluate scaling.

Q: How and where should CGIAR’s MELIA community share its work on MELIA?

Amy: Publish! Write articles about CGIAR’s evaluation experience for peer-reviewed journals. Write blogs about it to share widely, both within CGIAR and externally. Find ways to promote evaluation, while reflecting on it as both process and outcome.

John: I believe that CGIAR MELIA can contribute to evaluation broadly by focusing more on last-mile knowledge translation. I would not suggest doing more of it, but rather digging deeply into a limited number of topics and developing them over longer periods of time. I believe this would serve an important need in development, where many (perhaps most) evaluations follow recipes with too little concern for quality and use. By developing approaches, methods, guidance, case studies, and the like over time, CGIAR can structure the work of evaluators more effectively while promoting their thoughtful application. Sharing this work online is, I believe, important, and it would require new social media strategies and modalities, for example a podcast for the European Evaluation Society by the lead of the evaluation function.

Q: What concrete steps/mechanisms inspired by IDEAS (or other evaluation conferences) would you recommend to strengthen the role of evaluation and MELIA in CGIAR?

John: With regard to how MELIA supports CGIAR, I would suggest strengthening the conditions that promote good evaluation work. For example, timeframes are often short, budgets are slim, and terms of reference can be expansive. These conditions can be addressed directly. More difficult is building deeper expertise among evaluation contractors. CGIAR’s support of the EvalForEarth community of practice goes a long way toward this, and more is possible. Finally, to benefit fully from evaluation, I would suggest generating greater enthusiasm for evaluation among researchers and more cooperation across CGIAR’s multiple evaluation-related groups. IAES has been building these bridges and will continue to do so.

Amy: Presenting at IDEAS and at other conferences provides the opportunity to share the exciting evaluation work at CGIAR and to learn from others. Keeping active in this area, as CGIAR has been doing for the past several years, for example recently at CGIAR’s Science Week in Nairobi and at annual gLOCAL events [events], supports further discourse and the growth of the evaluation function within CGIAR. I also agree with John’s suggestion, and would add that alongside finding ways to support good evaluation work, synthesizing the results of those evaluations would help integrate findings across evaluations, refine theories, and improve future interventions. I know CGIAR engages in some of this evaluative work already, including a new summary of MELIA-related evidence [link].

Translating materials into multiple languages is also useful. The EA Guidelines in Spanish, for example, have proven particularly useful in my own work with CGIAR in engaging more deeply with its centers in Latin America.

I would also add that shifting resources toward ex-ante and developmental evaluation may, in certain circumstances, prove a more effective use of funds than ex-post or summative evaluation. This may be the case for highly complex interventions, where no single evaluation can capture their full complexity and where effectiveness may depend on multiple shifting factors. Instead of relying on retrospective evaluations, scoping studies that examine the structure or system in a given context (including policies) to better inform program design may be a better use of resources. Another approach is having an evaluation team “accompany” a program, providing real-time support in using data for evaluative thinking and decision-making, engaging stakeholder participation, and tracking shifts in the trajectory toward change.

Q: Any specific examples of 'moving moments'/personal highlights/surprises from the event? Why does this kind of work matter to you?

Amy: The IDEAS conference took place in early March, just six weeks or so into the Trump Administration. The dismantling of USAID, the US government’s cuts to the UN, and its disengagement from the SDGs were all happening, and news of European governments cutting their aid budgets was also a topic of conversation at the conference. I was happy to hear about the BRICS countries in several sessions: the move toward greater sustainability in their development, particularly in the case of China, and their continued interest in engaging globally on pertinent cross-border issues like climate adaptation. Given my long experience in China and familiarity with its language and culture, I have long followed its global engagement. Yet several sessions brought me up to date and reminded me of today’s shifting global power balance, as we move from, as they say in political science, a unipolar to a multipolar world.

John: I must admit, the most moving part was reconnecting with colleagues from around the world whom I do not see often enough. I often learn as much from them in casual conversation about what they presented or what they thought about a talk as from the talk itself.

All in all, if you aren’t working to make the world better for other people, why not? It’s the most fulfilling work one can do AND it helps people. No brainer.

Amy: I am passionate about evaluation. It is a dynamic tool that can be used in multiple ways in multiple contexts. I am especially passionate about evaluation as a means to engage people as citizens within their communities, and as a means to promote greater democratic exchange. Evaluation as democracy in action, effectively, is a particular interest of mine. 


Evaluation
May 27, 2025

Written by

  • Amy Jersild

    Evaluator and Consultant to IAES Evaluation Team
  • John Gargani

    Adjunct Professor at Claremont Graduate University (USA) and Workshop Facilitator


