
Co-development and validation to move From Ambition to Preparedness: Strengthening Evaluability Across the 2025–30 Portfolio


Making CGIAR’s Portfolio Evaluation-ready by Design, not by Chance

The Evaluation Function thanks every participant in the 2025 Evaluability Assessments (EAs) of CGIAR’s 2025–30 Portfolio for their engagement. For further clarity, consult the FAQs, which are aligned with the Terms of Reference (TORs) (MYEP 2025–27, SC/M21/DP5).

After months of engagement, from the needs assessments in February (link) to Science Week in April (link), alignment with the ISDC (blog), and collaboration with the Project Coordination Unit (PCU), the intensive data collection and analysis reached a pivotal milestone toward making the 2025–30 Portfolio evaluation-ready by design.

Throughout October and November, the co-development of recommendations, together with an enhanced approach to validating evaluation results as per the Review of the CGIAR Management Response (MR) System to Independent Evaluations [SC decision M21-EDP9, 18 April 2025], led to the finalization of the EA synthesis recommendations.

Two core activities occurred:

  1. The Nairobi co-development workshop, held at the ILRI campus in October, took place on the sidelines of the MELIA-F project launch. The Evaluation Function brought together CGIAR stakeholders and subject-matter experts (SMEs) engaged in the selected EAs to close the data-collection phase and to jointly synthesize and co-develop recommendations. MELIA specialists, program teams, accelerator leads, center representatives, SMEs, and colleagues from the Portfolio Performance and Project Coordination Units worked together to make sense of emerging findings and to co-create actionable recommendations, facilitated by the Evaluation Function Lead, Svetlana Negroustoueva.

Drawing on case insights from the Digital Transformation Accelerator, the Policy Innovation Program, and the Scaling for Impact Program, participants examined the strengths and constraints that shape evaluability across the CGIAR portfolio. They reviewed recurring themes emerging from the preliminary system-level analysis and traced how design logic, MELIA, inclusion, finance, and learning interact across CGIAR. Breakout groups stress-tested feasibility under tight budgets, prioritized “low-hanging fruit”, and differentiated essential minimum standards from aspirational “gold standards”.

EA Team Lead Amy Jersild highlighted common emerging issues:

  • ambitious but under-tested theories of change;
  • technically strong yet fragmented MELIA systems;
  • inclusion stated as a principle but weakly measured;
  • short-term horizons masking long-term change;
  • evaluation readiness assumed rather than intentionally planned;
  • learning valued but not institutionalized;
  • legacy funding complicating value-for-money analysis.

  2. The virtual validation meeting in November brought the process full circle. The EA team and portfolio stakeholders reconvened to test, refine, and confirm the recommendations. With 13 EAs completed, the synthesis revealed a clear picture: CGIAR is structurally evaluable but functionally fragmented. The intent is strong, but the connecting systems still need to catch up. Eight recommendations and suggested actions were further enriched.

Finally, discussions during the consultative group meeting with funders and CGIAR management, together with a brief presentation of the final set of four recommendations and an accompanying menu of options at the MEL Community of Practice (CoP) re-launch, helped move the process closer to the next steps: endorsement by SIMEC, and the convening of MR development and tracking of responses by the Portfolio Performance Unit (PPU).

Across all of these moments, one message resonated strongly: evaluability is a collective responsibility. It requires a shared commitment to building a portfolio capable of generating credible, comparable, and actionable evidence for learning and accountability. By tightening causal logic, integrating MELIA and finance, making inclusion measurable, planning for long-term evidence, and embedding learning in governance, CGIAR is laying the foundations for a portfolio that not only delivers results but can also demonstrate value with credibility and confidence through 2030 and beyond.

Read more and stay tuned for the report release:

  • Update to SC22
  • CGIAR Science Week 2025: Assessing the Evaluability of CGIAR's 2025-2030 Portfolio

To learn more about Evaluability Assessments, visit the Evaluability Assessment Resource Hub.

Evaluation | 14 October – 26 November 2025

