Harvesting vegetables in Bangladesh. Photo: M. Rahman/WorldFish
Blog

Evaluative Reviews: A Streamlined Approach to Accountability and Learning

In the first of a series of blogs on the CGIAR Research Program (CRP) 2020 Review Process, Svetlana Negroustoueva, Senior Evaluation Manager, CAS Secretariat, and Ravi Ram, Evaluation Specialist and former CAS Evaluation Consultant, reflect on emerging lessons from the CRP 2020 Review process so far and explain what taking a ‘lean approach’ to evaluation means in practice.

2020: A year of CRP Reviews

In 2020, the CAS Secretariat Evaluation Function is coordinating the independent review process for the twelve CGIAR Research Programs (CRPs). As we write, the first three reviews are complete and the next three are well underway. All twelve will be finished by the end of the year.

These reviews come at the request of the CGIAR System Council via its Strategic Information, Monitoring & Evaluation Committee (SIMEC). As CGIAR undergoes transformative change through the One CGIAR transition process, evaluations of its agricultural research provide a necessary and ideal opportunity for accountability and learning.

In this blog, we discuss how generating evidence from twelve large research programs in a short time frame required us to innovate: to define a rapid process that could still yield high-quality findings.

Taking a lean approach

A lean approach to evaluation has become an increasing priority as resources tighten, and the COVID-19 pandemic has underscored its potential: it brings top-level evidence to decision-makers within a short turnaround time. But what exactly do we mean by a ‘lean evaluation approach’?

It is an approach nested within a well-accepted definition of evaluation as “any systematic process to judge merit, worth or significance by combining evidence and values”.

For the 2020 CRP Reviews, we applied CGIAR’s Quality of Research for Development (QoR4D) Framework and its four dimensions: relevance, scientific credibility, legitimacy, and effectiveness. The approach also relied on established documentation and evidence, generating little new data.

We also set the following working principles:

  • Restrict the scope to two of the six criteria in the CGIAR evaluation policy (effectiveness and quality of science) - this limited scope positioned the exercise as evaluative reviews rather than full evaluations.
  • Prioritize accountability over learning
  • Reduce the burden on the CGIAR Research Programs being evaluated
  • Limit primary data collection, relying more heavily on secondary data, including self-reported monitoring data from each program
  • Use independent external evaluative review teams that include a professional evaluator and a subject matter expert in the CRP-specific area.

Standardizing and harmonizing the data

A key difference between the evaluative reviews and conventional evaluations lay in the level of evaluation management by the CAS Secretariat, before and during the reviews.

To promote a level of standardization and harmonization of data quality across the twelve reviews, we pre-analyzed two sets of data: (1) performance monitoring data collected by CRPs (e.g., the rate of achieving planned milestones, innovations) and (2) bibliometric data (citations, author productivity, impact estimates, etc.) based on the publications from each program.

We provided the pre-analyzed data to the external review teams in a standardized format, with no interpretation. This preserved the independence of the evaluative reviews - more about this in forthcoming blogs. With a short timeframe for course correction, we also maintained frequent contact with the CRP contacts and the evaluative review teams, and further implemented quality assurance checklists and midpoint check discussions with the teams. After-action reviews based on the first round identified lessons to refine the evaluative review process.

Pros and cons of the approach – early conclusions

Based on early results from the first three CRP Reviews, the evaluative reviews show strong potential compared to conventional evaluations, and the approach could apply beyond CGIAR and agricultural research programs.

Emerging comparative benefits include reduced cost in both time and money, a quick turnaround with a good return on the time evaluation teams invest, and usefulness for accountability-focused mandates, with some window for learning.

Yet this approach cannot offer the same scope and depth as a full evaluation, for example, missing nuances in analysis against a wider scope of evaluation criteria or in-depth exploration of cross-cutting themes. It also requires proportionately more management and oversight, as well as substantial pre-analysis, which may not be suitable in other contexts. Further, the evaluative review approach leaves little time for corrective action and is constrained in addressing learning. Time limitations make it challenging to apply desired quality assurance procedures involving multiple reviews of deliverables. Moreover, users’ expectations must be managed through regular communications, as the findings from the evaluative reviews are by design much more limited than those of conventional evaluations.

Overall, the evaluative reviews appear to satisfy top-level accountability requirements, while also documenting some system-level learning. For effective learning about specific research programs or more comprehensive analysis for accountability, however, the evaluative reviews are not appropriate, and a full evaluation is indispensable.

Find out more

We invite you to read the reports from the three reviews completed in July 2020:

  • Agriculture for Nutrition and Health (A4NH)
  • Grains, Legumes and Dryland Cereals (GLDC)
  • WHEAT

You can also find much more information on the CRP Reviews 2020 webpage and in our Frequently Asked Questions.

Stay tuned for the release of the other nine CRP review reports during the next few weeks, as well as further updates on lessons learned on the application of the Quality of Science criteria, the importance of data quality and the use of a theory-based approach for evaluating effectiveness.

Svetlana Negroustoueva, Senior Evaluation Manager, CAS Secretariat

Ravi Ram, Evaluation Specialist, Former CAS Evaluation Consultant

Sep 23, 2020
