Blog

Evaluating Quality of Research for Development in Process and Performance Evaluations: CGIAR approach


The world is at a critical juncture that requires robust science and innovation to fight the climate crisis and transform food, land and water systems. For 50 years, CGIAR has been evolving its research and evaluation practices to address global challenges, with a focus in recent years on the Sustainable Development Goals (SDGs). As research has evolved from component technologies to systems transformation, the need to evaluate CGIAR research remains critically important.

CGIAR’s Independent Advisory and Evaluation Service (IAES) implements CGIAR’s multi-year, independent evaluation plan and provides operational support as the secretariat for the Independent Science for Development Council (ISDC) and the Standing Panel on Impact Assessment (SPIA) as approved by the System Council. 

Framing Evaluation of Quality of Science

The evaluation function under IAES is the custodian of the CGIAR-wide Evaluation Framework and Policy (2022); these two documents underscore the alignment and value of the evidence-based advice and recommendations provided by the independent evaluation function and ISDC. The methods and criteria for evaluating CGIAR research are underpinned by two fundamental principle-based frameworks crucial to CGIAR and its stakeholders: the Quality of Research for Development Frame of Reference (QoR4D; last revised 2021) and the 2019 OECD Development Assistance Committee (OECD/DAC) Evaluation Criteria. A global standard in the evaluation of international development co-operation, the OECD/DAC criteria provide a basis for making evaluative judgments. The Quality of Research for Development in the CGIAR Context (QoR4D) framework facilitated CGIAR System-wide agreement on the nature and assessment of the quality of scientific research, a concept broadened beyond scientific credibility to include the likelihood of achieving development outcomes. The QoR4D framework guides and enhances the quality of R4D at all levels, from strategy to research activities.

The new guidelines on applying the QoR4D elements to process and performance evaluations provide the framing, criteria, dimensions and menu of methods for assessing the quality of CGIAR research for development. A consistent, common approach to evaluating the quality of CGIAR research and science will support CGIAR in delivering its mission and implementing its 2030 Research and Innovation Strategy.

Quality of science (QoS) is an important component of the QoR4D framework. As such, evaluating QoS is crucial to learning how to better align science with the broader research-for-development agenda, i.e., linking science, technology and innovation with development challenges and community needs. While complex, evaluating science quality is essential for three main reasons:

  1. It provides accountability for public and private investment before further research is undertaken.
  2. It informs our funders about the quality of the scientific processes and the scientific outputs.
  3. It provides evidence about how CGIAR science contributes towards its institutional objectives and how our work will contribute to the achievement of the SDGs. 

Why are evaluation guidelines important?

The CGIAR Evaluation Framework and Policy (2022) defines evaluation, rooted in the OECD definition, as the systematic and objective assessment of an ongoing or completed project, program, initiative, policy or operational modality in CGIAR, including its design, implementation and results. Rigorous, independent, external evaluations are foundational to CGIAR’s efforts to inform the design of interventions, provide actionable evidence to support management and governance decisions, and ensure a high level of accountability to donors.

The standards and principles in the Evaluation Framework provide a point of reference for the professionalism of research-for-development evaluation and govern how evaluation is conducted in CGIAR. The new guidelines will directly support evaluators, evaluation managers and commissioners involved in process and performance evaluations of research and science in CGIAR and similar contexts. Evaluating science quality in the context of R4D initiatives is a rapidly evolving field, and the guide will be revisited and revised regularly.

The primary objectives of Applying the CGIAR Quality of Research for Development Framework to Process and Performance Evaluations are to:

  • Facilitate a common understanding of the QoS evaluation criterion, including its relation to other evaluation criteria.
  • Outline a common approach to evaluating science quality and provide a menu of methods.
  • Cross-reference the ISDC ex-ante measures with the Evaluation Policy measures for midline/ex-post evaluation.
  • Underline the roles and responsibilities to facilitate evaluating QoS in CGIAR.

In developing the Quality of Science evaluation criterion, which is described in detail in the Evaluators’ Guide, QoR4D design principles and assessment criteria have been integrated, whilst also ensuring complementarity with the OECD/DAC criteria. This has been done with CGIAR’s positioning in the research-for-development space in mind.

Quality of Science: The QoS evaluation criterion pertains to scientific credibility and legitimacy. The definition of the criterion is derived from the QoR4D frame of reference, which records CGIAR’s System-wide agreement on the nature and assessment of research quality. The QoR4D describes research quality according to four key elements: relevance, scientific credibility, legitimacy and effectiveness.[1] Relevance and effectiveness are treated as separate evaluation criteria.

The guidelines set out a comprehensive approach to the evaluation of QoS built on four interlinked dimensions: research design, inputs, management processes and outputs. These four dimensions allow evaluators to produce more rigorous evaluations, enabling a more objective and systematic assessment of an intervention’s capacity to advance towards system-wide impacts and the SDGs.

The role of professional evaluators is crucial to ensuring consistent application of evaluation processes. Meanwhile, the seven evaluation criteria from the CGIAR Evaluation Policy structure the substantive focus of evaluation questions. As one of the seven criteria, the QoS evaluation criterion reflects the identity of CGIAR as a global research-for-development partner.

Moving Forward

Science and innovation are crucial to tackling the climate crisis, global food system challenges and sustainable development. Reliable, comprehensive and scrutinized research methodologies lie at the heart of QoR4D, pushing CGIAR evaluators, evaluation managers and commissioners to reflect on how science and research align with the SDGs while promoting sustainable development practices around the world. By advancing the quality of research and science through updated evaluation methods, CGIAR will be able to accelerate the impact of its initiatives towards a world free from hunger, poverty and environmental degradation.

[1] A co-designed guideline on evaluating the quality of science in CGIAR details the approach and methods for operationalizing the QoS evaluation criterion of this Policy.

Evaluation
Dec 19, 2022

Written by

  • Svetlana Negroustoueva, CGIAR Advisory Services

    Svetlana Negroustoueva

    Evaluation Function Lead, IAES
  • Andrew Ash

    ISDC Member

