In February 2022, the CGIAR System Board (23rd Session) and System Council (15th Meeting) approved the new CGIAR Evaluation Framework and revised Evaluation Policy. The two documents apply to CGIAR as an operating entity: the CGIAR System Organization, all CGIAR Centers, and all organizational business units under One CGIAR.

The Evaluation Policy and Framework help meet CGIAR’s evolving internal needs, supporting an organizational culture that engages with and uses performance and process evaluation for accountability, learning, and evidence-based steering. The Policy revision also aligns with external drivers, such as the 2030 Agenda for Sustainable Development.

The Framework and Policy underscore the importance of leadership commitment to a robust evaluation practice that delivers value to CGIAR and its stakeholders. For leadership, this implies allocating adequate resources to Monitoring, Evaluation and Learning (MEL) structures and systems and engendering a results-based culture that promotes the effective use of evaluation. The documents emphasize management engagement and response to evaluations as a mechanism to strengthen the use of evaluations by management, governance bodies and, to the extent possible, partners. This fosters greater ownership of the change process and ultimately ensures accountability for results.

The Evaluation Function within the Independent Advisory and Evaluation Service (IAES)* has engaged in co-design of the guidelines to operationalize the CGIAR Evaluation Framework and Policy.

To learn more, read the two documents and the blog “A New Framework and Policy for One CGIAR”, and watch the brief video below.

*Formerly known as CGIAR Advisory Services Shared Secretariat (CAS Secretariat)

 

Frequently Asked Questions (FAQs)**

Background

The CGIAR Evaluation Framework and Policy are anchored in five pillars, one of which is knowledge management and learning, and another is transparent communication of process and results. The spirit and letter of these operational evaluation documents promote learning and transparency. These FAQs have been compiled from questions frequently asked during the iterative policy revision process with CGIAR leadership, CGIAR Monitoring, Evaluation and Learning (MEL) practitioners, and other key stakeholders. The FAQs are intended to help operationalize the Policy and Framework and to inform the co-development of guidelines. In the course of implementing the Framework and Policy, the Independent Evaluation Function under the IAES welcomes continuous feedback from CGIAR staff and partners. As custodian of the Policy, the Evaluation Function invites feedback and questions at CAS-Evaluation@cgiar.org.

A. EVALUATION COVERAGE AND SCOPE

How are the new CGIAR Evaluation Framework and revised Evaluation Policy fit-for-purpose?

They both reinforce the use of evaluations for learning, to support decision-making and accountability in One CGIAR, as recommended in the 2019 CGIAR MOPAN Assessment. Together with CGIAR management and the CGIAR monitoring, evaluation and learning community of practice (MELCOP), the Framework and Policy:

  • Enhance attention to quality assurance dimensions for performance and process evaluations in CGIAR, irrespective of the commissioner;
  • Enhance the capacity of relevant staff who manage evaluations, and of those who have a role in discharging the evaluation practice within the CGIAR System; and
  • Reinforce and strengthen the complementarity between evaluation and other PRMF-related functions in One CGIAR.

The Policy covers the evaluation functions of CGIAR exercised by the IAES, the Research Coordination and Performance unit, and the MEL functions of the Initiatives and Platforms. Why are Platform or non-pooled project evaluations not part of the text?

The Evaluation Framework and Policy guide the conduct of all performance and process evaluations in One CGIAR, including Platform evaluations and evaluations of non-pooled projects.

Why is Impact Assessment (IA) not defined in the Framework and Policy? It appears disconnected from Monitoring, Evaluation and Learning.

Impact assessments are explicitly out of the scope of both the CGIAR Evaluation Framework and Policy. All related definitions are provided in the MELIA glossary. Consistent with the Evaluation Policy’s second pillar (of five), “A Holistic and Consistent CGIAR-wide Approach to MEL,” evaluations “collect and/or use the best available primary and secondary data supplemented by other data as necessary,” which would include evidence from impact assessments.

What has the MEL community identified as its needs and the purpose of the Evaluation Policy?

A fit-for-purpose evaluation policy is needed to support the evolving needs and demands of One CGIAR. As part of the iterative policy revision approach vetted with SIMEC in 2021, the evaluation function under the Independent Advisory and Evaluation Service surveyed the CGIAR MEL Community of Practice (MELCOP) in March/April 2021 to gather its feedback as a key co-user of the policy and framework. Responses to the multiple-choice questions indicated the top desired objectives of the evaluation policy:

  • To provide guidance on evaluation principles, approaches and criteria, and on requirements for the conduct and use of evaluations by programs/Centers;
  • To provide guidance on requirements for the conduct and use of evaluations to CGIAR Centers and Initiatives; and
  • To enhance the use of evaluation for decision-making.

How should the “CGIAR evaluation criteria” (see page 7 of the Policy) be applied during stage-gating?

Stage-gating decisions can be supported by ‘evaluability assessments’, which help identify potential bottlenecks and solutions, allowing adjustments towards evaluation readiness. The choice of evaluation criteria (during an evaluability assessment or when scoping an evaluation) would align with the stage-gate at which decisions are being made. While the ‘Effectiveness’ criterion would be key for stage-gating, criteria such as ‘Relevance’ or ‘Sustainability’ may not be. Overall, stage-gating is out of scope for the Evaluation Policy, and the related guidelines are being elaborated in relation to the PRMF and the Evaluation Policy.

How do the Evaluation Framework, Policy and their supporting documents (guidelines) interact with the stage-gating process envisioned?

The Evaluation Framework, Policy, and supporting guidelines are all aligned with the PRMF and the stage-gating process, as detailed in the CGIAR Technical Reporting Arrangement (pages 16-17). Evidence from evaluations (including independent evaluations per the approved independent evaluation workplan) informs adaptive management and business-cycle assurance stage-gating processes. The evaluation guidelines, which provide implementation support, are not meant to be prescriptive; they are to be tailored to the specific circumstances, unique needs, and decision gates of an Initiative (or intervention).

B. RELATED AND OVERARCHING FRAMEWORKS

What is the relationship between the Performance and Results Management Framework (PRMF) and Monitoring? Who will receive and implement the PRMF?

The 2030 Research and Innovation Strategy is guided by the new Performance and Results Management Framework (PRMF) 2022-2030, which feeds in data and evidence from key areas of work. While the 2022-2030 PRMF and its associated frameworks support an enabling environment and provide the conceptual framework for effective measurement, learning, and accountability for performance and results, implementation of the PRMF itself is not within the remit of the Evaluation Framework and Policy.

The Performance and Results Management Framework (PRMF) and the Theory of Change are the main instruments guiding evaluations. Is there a section in the Policy related to these instruments?

The Evaluation Framework and Policy are rooted in the PRMF, as it guides accountability to funders. Reliance on a theory of change, a theory-based approach, is at the core of research and evaluation. Refer to section 3.2, page 6 of the Evaluation Framework: “Theories of change describe the pathways to impact—which can be complex, intersecting, and often nonlinear—drawing on insights from the social sciences, including economics and international relations theory.”

Regarding ISDC, what about evaluation planning at the proposal level? ISDC is responsible for reviewing this in proposals from Initiatives. Why is no reference made to that?

ISDC reviews are not evaluations as defined under the Policy and Framework. The ISDC developed the quality of research for development (QoR4D) framework to review proposals, and its elements align with the evaluation criteria.

C. LEADERSHIP AND ROLES & RESPONSIBILITY IN THE INSTITUTIONAL ARCHITECTURE

There are many MEL professionals in CGIAR; do they all have responsibilities related to evaluative activities? Do all the MEL professionals report to the EMT or a respective Managing Director?

The Evaluation Framework assumes that responsibilities pertaining to evaluation will be carried out by CGIAR governance, management, the evaluation function of the Independent Advisory and Evaluation Service, and the units within the operational structure that pertain to monitoring, evaluation, and learning (MEL), as detailed in current governance frameworks, charters, and similar documents (see Evaluation Framework, page 1).

The reporting lines of MEL professionals would need to be clarified in the terms of reference of CGIAR’s Project Coordination Unit (PCU) and Portfolio Performance Unit (PPU) within the Research Delivery and Impact Division.

The Policy makes references to MEL-related services provided by professionals in CGIAR. Do “MEL professionals” (which is a job description, not a job title) have to comply with the Framework and Policy at various levels?

The aim of the CGIAR-wide evaluation practice is to produce evaluations that provide accountability and support decision-making and learning (see also the response to the related question on the PRMF above). The standards and principles provide a point of reference for the professionalism of research-for-development evaluation and dictate how evaluation is conducted in CGIAR. For technical aspects, yes, evaluation professionals do bear functional responsibilities to comply with related frameworks and policies, including the Evaluation Framework and Policy and the Performance and Results Management Framework (PRMF). Of the five pillars of the Evaluation Policy, the two most relevant are:

1.   Evaluation Embedded in CGIAR Assurance and Management.

2.   A Holistic and Consistent CGIAR-wide Approach to MEL.

Why is the reference to the “Independent Science for Development Council” or “ISDC” needed in the Framework and Policy?

CGIAR works at the intersection of science and development. The Independent Advisory and Evaluation Service (IAES) comprises the Independent Science for Development Council (ISDC), the Standing Panel on Impact Assessment (SPIA), and an Independent Evaluation Function implemented at the request of the CGIAR System Council. Together, the IAES provides external, impartial, and expert advice related to strategy and positioning, program evaluation, and impact assessment.

The evaluation criteria of this Policy adhere to the 2019 OECD Development Assistance Committee (OECD/DAC) evaluation criteria and reflect the characteristics of research for development in CGIAR’s context, consistent with the quality of research for development (QoR4D) framework elements elaborated by ISDC. They facilitate CGIAR System-wide agreement on the nature and assessment of the quality of science subject to evaluation (see forthcoming guidelines to evaluate QoS).

The Policy states: “Instrumental MEL related services are provided by professionals in CGIAR group operational units which are responsible for project coordination, and portfolio performance reporting” (page 9). Is this within the remit of the PCU?

One of the three functions of the Project Coordination Unit (PCU) is providing standards and capacity building on project and portfolio management, including MELIA, best practices, and procedures.

Who will be responsible for the delivery and implementation of the policy and framework?

Everyone in CGIAR, including MEL professionals, Initiative Leads, research scientists, and leadership, is responsible for the delivery and implementation of the Policy and Framework.

Is there adequate staffing, including designated MEL staff, and are budgets adequately resourced to implement the Framework and Policy in CGIAR?

The Policy articulates the roles and responsibilities of the key stakeholders: CGIAR Group Operational Units (including MEL professionals), CGIAR leadership, and others. The Evaluation Framework and Policy underscore the importance of leadership commitment to a robust evaluation practice that delivers value to CGIAR and its stakeholders. For leadership, this implies allocating adequate resources to Monitoring, Evaluation and Learning (MEL) structures and systems and engendering a results-based culture that promotes the effective use of evaluation for accountability, learning, and decision-making.

One of the 15 standards and principles of the Evaluation Framework is “Mutual accountability: In CGIAR, expectations for evaluation are matched with adequate investments in requisite financial and human resources. The capacity and systems for data collection and real-time information underpin mutual accountability” (Evaluation Framework, page 4).

D. EVALUATION MANAGEMENT

Who decides when to conduct an evaluation and at what level?

Decisions to conduct an evaluation are driven by CGIAR funders and management, depending on the type and timing of the evaluation and on needs within the business cycle (mid-term or end of an intervention, platform, etc.); see the CGIAR Technical Reporting Arrangement. An evaluability assessment would help decide whether to conduct an evaluation and with what scope.

One of the 15 standards and principles in the Evaluation Framework is evaluability. How do the Framework and Policy support evaluability assessments to inform readiness for evaluation and improve the prospects of using evaluation findings?

To support the implementation of the revised Evaluation Policy and the new Evaluation Framework, the Evaluation Function within the Independent Advisory and Evaluation Service has co-developed guidance on, among other topics, evaluability assessments and evaluating Quality of Science. Through engagement with the MELCOP, further needs will be identified and addressed by the relevant bodies.

Is CGIAR somewhat responsible for the effectiveness and impacts of its research?

Evaluation criteria serve as the basis upon which evaluative judgments are made to assess CGIAR interventions and processes, to provide accountability, and to support decision-making and learning. The Policy’s definitions of impact and effectiveness encompass both the OECD DAC criteria and the Quality of Research for Development (QoR4D) elements, as applicable to the CGIAR context. Thus, the “Impact” and “Effectiveness” evaluation criteria are within the scope of evaluations covered by this Evaluation Policy and Framework.

E. MANAGEMENT RESPONSE (MR) AND USE OF EVALUATIONS

Most evaluations will be done toward the end of the 3-year initiative funding cycle. Are initiatives responsible for monitoring recommendations that cannot be acted upon until a second funding phase?

The PRMF applies beyond 2024, as do the Evaluation Framework and the Policy. The process of management engagement and response to evaluations will facilitate sustainability and ensure that appropriate, agreed-upon and joined-up institutional vehicles are in place through which follow-up actions will be implemented.

**FAQs last updated August 2022.