The Conference on Multi-Dimensional Evaluation for Influence and Transformation, held in Rome from 4-6 March 2025, brought together global experts to discuss how evaluation can drive sustainable change by shaping policy and influencing real-world outcomes. Co-organized by the International Development Evaluation Association (IDEAS) and the Independent Evaluation Office of the New Development Bank, the event featured over 36 sessions and nine professional development workshops across five strands, including climate action, gender equity, and the Sustainable Development Goals (SDGs).
CGIAR’s Independent Advisory and Evaluation Service (IAES) facilitated and participated in some of these conversations, shaping discussions on approaches, methods and content of evaluations. We also brought CGIAR’s research-for-development context into the evaluation realm. Here is a closer look at the key insights and takeaways.
Learnings from the Professional Development Workshops: Foresight and Scaling in Development
IAES Director Allison Smith and Evaluation Function Lead Svetlana Negroustoueva were thrilled to explore the application of foresight thinking and methods to evaluation. Deepening their understanding of ‘foresight’ and its role in evaluation theory and practice through real-world case studies helped build on related work by the Independent Science for Development Council, particularly Foresight and Trade-Off Implications for One CGIAR. Two foresight methods, the futures wheel and alternative scenarios, were identified as relevant to CGIAR’s future evaluation practice: they can inform evaluation recommendations and expand adaptive evaluation approaches to be more strategic and forward-thinking.
IAES Senior Evaluation Manager Ibtissem Jouini and Amy Jersild, consultant on evaluability assessments (EAs), participated in the workshop Scaling Impact: New Ways to Plan, Manage, and Evaluate Scaling, facilitated by Dr. John Gargani, former member of the CGIAR Evaluation Reference Group and author of Scaling Impact: Innovation for the Public Good. The workshop provided insights into approaches for scaling the social and environmental impacts of interventions, research, and investments, and introduced key concepts and practical tools for applying the principles of scaling and identifying different pathways to scale. Hands-on exercises and discussions emphasized the importance of articulating a scaling theory of change that conveys the logic of scaling and identifies scaling risks, as well as the role of dynamic evaluation systems in adapting to complex environments. A checklist outlining key dimensions for evaluating CGIAR’s scaling efforts served as a concrete example to enrich group discussions. Participants agreed that scaling is more than just growth: it is about ensuring that innovations and interventions achieve meaningful and lasting impact.

Evaluability Assessments and Synthesis as Practical Tools for Transformative Evaluation
The multi-agency session The Case for Assessing Evaluability: Building Capacities for Transformative M&E Practice, hosted by Amy Jersild and Gaia Gullotta (CGIAR), Carlos Tarazona (FAO), and Peter Wichmand (IDEAS Board, retired from ILO) (featured on the cover picture), highlighted the growing role of evaluability assessments (EAs) in ensuring needs-driven and actionable evaluation practices. CGIAR’s EA framework facilitates targeted engagement between monitoring and evaluation (M&E) for continuous learning and adaptation [Explore the Evaluability Assessments Hub]. 'EAs assess readiness and boost evaluation effectiveness,' noted Jersild. FAO integrates EAs early in project design to ensure clear resource allocation, while the ILO example demonstrated how EAs connect to the SDGs and national processes.
Continuing the discussions on transformative evaluation, the Roundtable Synthesis–More Than the Sum of Its Parts: Enhancing Use of Evidence to Accelerate Action and Progress Toward the SDGs focused on making evidence timely, accessible, and actionable. Panelists included Aurelie Lamoyer (head of Use Unit), Judith Friedman (head of Global and Synthesis Unit, WFP Office of Evaluation), Daniel Ortega (director, Contributions and Evaluation at CAF, Development Bank of Latin America), Dr. Isabel Fletcher (Data for Science & Health, Wellcome Trust), and Svetlana Negroustoueva (lead of CGIAR Independent Evaluation Function).
Panelists explored mixed-methods approaches to synthesis, referencing the UNEG Synthesis Working Group's guidance. Synthesis is crucial for capturing progress, identifying challenges, and informing decision-making to accelerate the SDGs, and it also feeds into voluntary national reviews (VNRs). Insights from CGIAR, such as lessons from the Synthesis on Quality of Science, focused on integrating research into evaluation syntheses and enhancing collaboration among agencies. One example of such collaboration is the joint publication Evidence Pathways to Gender Equality and Food Systems Transformation (2021) across the Rome-based agencies. Panelists agreed that synthesis improves the depth, quality, and efficiency of evaluations, enabling faster and better-informed decisions. As Dr. Isabel Fletcher remarked, evaluators should ‘leave no evidence behind’ and not let ‘the perfect be the enemy of the good’, underscoring the timeliness and utility of synthesizing across agencies and topics.
The discussion focused on ensuring that evaluation synthesis serves as a practical tool for decision-making. A key topic was the role of AI in evidence synthesis, which generated both enthusiasm for its potential and concerns about credibility and governance. The importance of collaboration, open knowledge, and providing actionable evidence tailored to decision-makers’ needs was emphasized. Lamoyer stressed the need to make evidence accessible: “We must move from generating evidence to ensuring it’s used where it matters most.”

AI and Technology for Evaluation
AI and emerging technologies were a hot topic in many sessions, underscoring their expanding role in evaluation. IAES Evaluation Function colleagues participated in the workshop AI for Transformative Evaluation, facilitated by Fiona Kastel (3ie), which examined AI’s potential to enhance human capabilities. Discussions focused on generative AI, machine learning, and the ethical considerations of integrating these technologies into evaluation practices. A key takeaway was that AI should serve as a catalyst for creativity and efficiency, augmenting rather than replacing human expertise. As Kastel put it, ‘AI should automate repetitive tasks, not replace creativity’.
CGIAR participants also engaged with experts from GEF IEO, ESA, and ADB during the session Utilizing Advanced Technologies for Evaluating SDGs, which explored how geospatial science, Earth observation, and AI enhance SDG evaluations. Satellite imagery enables tracking progress and even ‘seeing the past’—offering insights beyond what field observations alone can provide. Ensuring equitable access to data and tools remains crucial to leaving no one behind. As AI continues to evolve, its role in research, evaluation, and decision-making is becoming increasingly fast-moving and transformative. Whether through sentiment analysis, intelligent chatbots, or advanced data retrieval mechanisms, AI is poised to reshape how we interact with and utilize knowledge—topics to be explored in CGIAR’s upcoming Methods Note.
CGIAR’s Contribution to Transformative Evaluations of Climate Change and Gender
CGIAR’s Independent Evaluation Function Lead, Svetlana Negroustoueva, along with Dr. Archi Rastogi (chief evaluation advisor at the Independent Evaluation Unit, GCF), Eskender Zeleke (head of the Independent Evaluation Function, AIIB), and Stefano D’Errico (head of Monitoring, Evaluation and Learning, IIED), hosted the session From Evidence to Action: How Can Evaluators Effectively Contribute to Transformational Change for Climate Action? Negroustoueva introduced CGIAR’s approach to evaluating Quality of Science (QoS), which deepens and refines evaluations in the research-for-development context and beyond, including in relation to climate change and the SDGs. Independent evaluator Natascia Palmieri praised the role of technical subject-matter experts, researchers, and scientists beyond the social sciences in operationalizing the QoS conceptual framework in CGIAR’s Science Group Evaluations. Key takeaways included the need for flexible and adaptive evaluation frameworks that can keep pace with climate crises, and for using evaluative evidence alongside scientific evidence in the Intergovernmental Panel on Climate Change (IPCC). “Evaluation must be adaptive and forward-looking to drive real policy impact,” emphasized Dr. Rastogi.

In the session Feminist Evaluation: Practical Applications, IDEAS Board members Fabiola Amariles, Sonal Zaveri (Evaluation Reference Group member), and Svetlana Negroustoueva presented the Feminist Evaluation (FE) framework developed by Dr. Donna Podems, emphasizing its role in understanding change and advocating for gender equity. Negroustoueva shared insights from the GENDER Platform Evaluation, which informed the development of the FE framework (Learn more). Echoing messages from other sessions, participants reiterated the need for deliberate use of participatory and context-aware methodologies: “We want to talk about data and evidence for those who are oppressed,” said Amariles.
Inclusion and participatory insights also emerged at the workshop Evaluation Ecosystems: Building Culturally Relevant and Context-Adapted Methodologies for Transformational Impact, led by the evaluation offices of NDB and IFAD. There, our colleague from IAES's Independent Science for Development Council (ISDC), Ines Gonzalez De Suso Iraola, reflected on the importance of considering not only who will use evaluations but also who helps shape them. Perspectives shift depending on who is in the room: a Kenyan sheep farmer who participated in an evaluation design discussion offered insights that reshaped the approach. Such moments underscore the need for diverse voices from the very start.

Closing Thoughts: Evaluations Driving Transformative Change and CGIAR’s Role in Bridging the Gap
The 2025 IDEAS/NDB Evaluation Conference created a significant space for new and seasoned evaluation professionals striving to learn, share knowledge, and discuss the evolving role of evaluation, with an emphasis on equity and inclusivity and the integration of new technologies. The conference highlighted that the future of evaluation lies in innovation, collaboration, and a commitment to driving transformative change through the use of diverse types of evaluative evidence and its presentation.
The IAES Evaluation Function of CGIAR continues to be at the forefront of these conversations, ensuring that evaluation standards and practices apply to its research-for-development context and that evaluative learning is useful for decision-making. Likewise, embedding nuanced approaches to evaluating science and research can enrich evaluations and maximize their potential as a powerful force for change.
Interested in learning more? Dive deeper into CGIAR IAES's latest evaluation insights and explore our publication library and news.
Acknowledgements
This piece is a co-created effort by the CGIAR IAES office: Svetlana Negroustoueva, Ibtissem Jouini, Marta Maria Molinari, Gaia Gullotta, Diana Cekova, Ines Gonzalez de Suso, and Marilyn Sanchez. Special thanks to Amy Jersild, John Gargani, and Natascia Palmieri, along with all colleagues who enriched the conference with their insights and collaboration.