In the dynamic world of global agricultural research, the CGIAR 2030 Research and Innovation Strategy promotes a multi-faceted approach, aiming to leverage science and partnership to drive positive change across regions, countries, and landscapes. The CGIAR Strategy seeks to improve food, land, and water systems in a context of climate crisis through three key Action Areas: Systems Transformation, Resilient Agrifood Systems (RAFS), and Genetic Innovation. Since 2022, six Regional Integrated Initiatives (RIIs) have been embedded within these Action Areas across the globe to maximize the impact of CGIAR's work at the national and regional levels. The RIIs have aimed to embody CGIAR's collaborative ethos of fostering a culture of partnership and capacity development.

In late 2023, the evaluation function of CGIAR's Independent Advisory and Evaluation Services (IAES) commissioned evaluability assessments (EAs) of four of the six RIIs, with the objectives of assessing the quality of their MEL approaches, facilitating reflection on capacity and readiness for evaluation, clarifying resource needs, and synthesizing learnings for organizational improvement. An EA Framework, designed specifically for CGIAR in 2022, facilitated assessment across six domains: intervention logic; MEL systems and resources; gender, diversity and inclusion; long-term evaluability; context and environment; and management and key stakeholder engagement and support. These four RII EAs were among the first conducted at CGIAR using the EA Framework.

The concept of evaluability is pertinent to an ambitious and evolving agenda like CGIAR's. EAs are central to a culture of results within any organization. The late evaluation theorist Michael Scriven described evaluability as analogous to requiring serviceability in a new car, and suggested that it may be thought of as "the first commandment in accountability" (Scriven, 1991, p. 138). While an evaluability assessment can be conducted successfully in retrospect (see an example on climate change adaptation: MacPherson et al., 2022), conducting one at the design stage in particular facilitates quality assurance and the sound development of a monitoring and evaluation approach.

For CGIAR's ambitious agenda of enabling important scientific information to be adopted into policies and programming and to influence the behavior of citizens across the globe, sound programs and evaluation readiness are paramount. Quality evaluations help strengthen programs, in turn supporting the achievement of that agenda.

CGIAR's evaluation function has previously conducted evaluative activities with an 'evaluability' lens (see, for example, the PRMS Study) and noted concerns affecting evaluability (see, for example, the blog on data). The significance of the EA Framework for CGIAR is that it provides a clear roadmap for promoting evaluability in programming and for ensuring that impact remains at the forefront of CGIAR's efforts. In applying the Framework, the EA team analyzed documents, collected primary data through interviews and surveys, and used the Framework to facilitate self-assessments by RII teams. Data analysis involved reflective meetings to draw out themes for each of the four EAs, using the EA Framework as a guide, and then to identify commonalities and differences across the four EA results as part of a synthesis. The same EA team conducted both the EAs and the synthesis under IAES's guidance.

The Synthesis Report identifies key common evaluability-related themes for consideration, which can be summarized as follows:

Varying interpretations of, and value placed on, evaluation. A tendency to value certain evaluative approaches over others underscored the need for a balanced approach to MEL that reflects CGIAR's overall evaluation strategy and institutional arrangements.

Problematic definition of the "evaluand", i.e., that which is to be evaluated. The support and integration of pooled-funded CGIAR interventions at the regional level are intertwined with country-level interventions from non-CGIAR portfolios, making it challenging to identify clear causal links. Related to the development of Theories of Change, understanding the specific inputs and resources invested in an evaluand makes it possible to attribute identified outcomes and impacts to them, something donors often value.

Limited investment in the development of a comprehensive and cohesive MEL framework during inception. Developing a MEL approach during proposal development, rather than during a programme's inception phase, undermines robustness and leaves programme staff and stakeholders unclear about the role of MEL. A MEL framework that is reflective and analytical in nature, with details on all aspects of the MEL process, should be a living document for programme staff to reflect on and use.

To meet the diverse needs of stakeholders, the EA team organized the EA results in two ways: (1) as practical short-term recommendations for planning for evaluation and strategic long-term recommendations for strengthening evaluability in future RII-like programming and across CGIAR as a whole; and (2) by the six domains of the EA Framework.

What did the EA team learn from the EA exercise? The first-time application of the full EA Framework within CGIAR validated its use and provided a learning opportunity for IAES and participants. While initial meetings were held with CGIAR stakeholders to understand the process and learn about expectations, more can be done to make the process even more utilization-focused, such as collectively articulating objectives and questions for the overall EA exercise. Jointly discussing the EA Framework in greater depth, to understand how it supports the overall exercise objectives, may also facilitate greater utility and uptake of results. While the emphasis was on analysis of programme and MEL documents, greater engagement through onsite interviews and observation could also enhance the exercise's learning potential and the utility of the results.

This EA effort has advanced CGIAR’s pathway to improve evaluability, boost evaluation cost-effectiveness, and foster continuous evidence-based learning within the organization. 

Access other CGIAR materials for further information: 

“Evaluability Assessments are an Essential New Tool for CGIAR Managers” October 2022. https://iaes.cgiar.org/evaluation/news/evaluability-assessments-are-essential-new-tool-cgiar-managers

Blog on Engaging with the Asian Mega-Deltas Initiative on Assessing Evaluability, April 2024. https://iaes.cgiar.org/evaluation/news/cgiar-through-iaes-engages-irri-asian-mega-deltas-initiative-and-asia-pacific

Recording of CGIAR’s training on evaluability assessments, June 2024 (coming soon).

 

References:

MacPherson, R., Jersild, A., Bours, A., & Halo, C. (2022). Assessing the Evaluability of Adaptation-Focused Interventions: Lessons from the Adaptation Fund. In J. I. Uitto & G. Batra (Eds.), Transformational Change for People and the Planet (Sustainable Development Goals Series). Springer, Cham. https://link.springer.com/chapter/10.1007/978-3-030-78853-7_12

Scriven, M. (1991). Evaluation Thesaurus. Newbury Park, CA: Sage Publications.