2025 Evaluability Assessments: Everything you need to know


The CGIAR 2025–2030 Portfolio Evaluability Assessments

As CGIAR launches its new research Portfolio for 2025–2030, ensuring that each program and accelerator is ready for robust, credible evaluation is more important than ever. That’s where the Evaluability Assessments (EAs) come in. Led by the Independent Advisory and Evaluation Service (IAES), these assessments provide actionable recommendations to strengthen monitoring, evaluation, learning, and impact assessment (MELIA) systems across the Portfolio. Conducted in three phases, the 2025 EAs will set the stage for midline evaluations in 2027—offering key insights to improve program design, track progress, and enhance CGIAR’s contribution to global development goals.

Explore the most frequently asked questions below to learn more about their purpose, process, and value for CGIAR programs.

  1. What are the purpose and expected outputs of these Evaluability Assessments?

The Evaluability Assessment (EA) aims to provide actionable recommendations for improved MELIA components for each of the programs and accelerators (P/As), while preparing the Portfolio for midline evaluations in 2027. Both the EAs and the midline evaluations of CGIAR’s 2025–30 Portfolio are included in the 2025–27 work plan for CGIAR's Independent Advisory and Evaluation Service (IAES) (SC/M21/DP5) and have been approved by the System Council. An update on the EA process was shared at SC22 (link).

  2. Who are the key stakeholders in the Evaluability Assessments?

Key stakeholders include program and accelerator staff, implementing partners, and any supported communities. Donors and CGIAR management, including the PPU and PCU, are also key stakeholders.

  3. How is the EA process coordinated with the ISDC reviews, the PPU, and the PCU?

In line with the 2025 EA Terms of Reference, in April–May the EA team partially assessed each P/A’s draft ToC and MELIA Plan (shared with the PPU on 23 May). The partial EA applied Domains A and B of the EA framework (Figure 1), referencing the 2023 EAs of the Regional Integrated Initiatives (RIIs) (link). These assessments were cross-referenced with the EA team’s initial review of the PPU’s guidance on ToCs and MELIA plans. The EA team also presented to ISDC reviewers as part of their induction for the review of P/A Inception Reports (link). ISDC reviews are among the key inputs for the full EA in August–October 2025.

  4. How will the EAs of the Portfolio be implemented?

The IAES’ EA Guidelines (also available in Spanish) are the primary resource for the EA process. The EA framework includes six core domains, with a seventh, focused on cost and efficiency (Figure 1), recently added at the funders’ request. Each domain is supported by guiding questions to inform implementation.

The EAs support the inception of the 2025–2030 Portfolio and are being implemented in three phases: inception, implementation, and synthesis (for SC23). The inception phase (March–July) included document reviews, data collection, and a light-touch application of the EA Framework to support the development of the P/A Inception Reports. It concludes with EA Inception Notes for each P/A, developed in collaboration with their teams to guide the approximately 10-day implementation phase (Table 1). The EA process will:

  • validate the EA purpose and scope;
  • identify and involve stakeholders;
  • apply the EA Framework;
  • collect key documents, identify potential evidence gaps, and engage the evaluand;
  • decide on evaluation readiness and make recommendations (e.g., in preparation for the 2027 midline evaluations of P/As); and
  • use EA results (for the inception and roll-out in 2027).

Final EA reports will detail results that build on ISDC’s review of the Inception Reports. An EA synthesis report will present an analysis of all 13 P/As at SC23.

Table 1. Estimated iterative 10-day implementation per Science Program/Accelerator 

Day    Key Processes and Steps
1–3    Review of P/A documents; interviews; engagement with self-assessment EA results per P/A.
4–6    Workshop (hybrid format to permit the broadest possible face-to-face inclusion, or entirely virtual); online sessions ranging from 1–2 hours to half a day, pending agreement in the Inception Note for each EA. The EA team and P/A staff combine preparatory work and come together to collaborate and co-produce.
7–9    Drafting of the agreed deliverable(s): a short and actionable EA report; validation of recommendations.
9–10   Finalizing the deliverable based on P/A staff feedback; response from P/A management.
  5. Who is conducting the Evaluability Assessments?

The core EA team includes IAES staff and consultants: a Team Lead evaluation consultant experienced in EAs, and a subject matter expert, with additional thematic expertise provided per P/A. To facilitate participation, strengthen program understanding, and build capacity, each program’s CGIAR MEL focal point has been engaged and invited to join the EA team. 

  6. What is the level of effort required from the P/A staff for this exercise?

The implementation phase of the full EA Framework will be detailed in the Inception Notes for each of the 13 P/As (Table 1). The EA workplan will be tailored and iterative, with timelines reflecting scheduling demands, the agreed level of effort, and coordination between the EA team and P/A staff.

Frequently Asked Questions on EAs

  1. What is evaluability? 

At CGIAR, evaluability is defined as “the extent to which an intervention can be evaluated in a reliable and credible fashion” (CGIAR Evaluation Framework, p. 1). The concept of evaluability is regarded as central to fostering a culture of results. While the evaluability of a program can be assessed at any stage in the project cycle, a focus on evaluability at the design and inception stage prepares for overall ‘measurability’, monitoring, and subsequent evaluation.

  2. Why are EAs important for programs?

An EA prepares programs for evaluation to ensure that program goals are agreed on, information needs are well-defined, evaluation data are obtainable, and intended users are willing to use evaluation results. 

  3. What is an evaluation?

The most widely accepted definition of evaluation is by theorist Michael Scriven (1991): “Evaluation is the systematic process to determine merit, worth, value or significance” of the evaluand, or that which is being evaluated (p. 139), thereby emphasizing the role of values in evaluation.

Explore

  • FAQ: Evaluation and Evaluability Assessments (EA)
  • Evaluability Assessments Resource Hub
  • Blog: Evaluability Assessments are an Essential New Tool for CGIAR Managers

 

Reference:

Scriven, M. (1991). Evaluation Thesaurus (4th edition). Sage Publications. 

 

 

Evaluation
Jul 23, 2025
