When independent evaluators speak truth to power, what happens next?
The true value of independent evaluations lies in whether their recommendations lead to action. This blog explores how independent evaluations are used, the critical role of management responses (MRs), and how governance bodies, such as the CGIAR Integrated Partnership Board (IPB), can help drive meaningful change. As CGIAR advances towards its 2030 Research and Innovation Strategy, Dr. Patrick Caron, Vice Chair of the IPB, and Ibtissem Jouini, Senior Evaluation Manager at the Independent Advisory and Evaluation Service (IAES), share their reflections on how evaluations can inform and influence strategic decisions.
What is the Board’s role in driving evidence-based change?
Dr. Patrick Caron: As CGIAR moves into its new 2025-30 Research Portfolio, which is designed to accelerate and strengthen the implementation of its 2030 Strategy, capitalizing on lessons learned and building on successful practices will be essential. To this end, insights from monitoring and evaluation must be systematically applied and learning broadly shared across the organization.
The CGIAR IPB, as the governing body of the CGIAR System Organization and the Integrated Partnership, plays a central role in providing strategic leadership and governance to advance CGIAR’s mission. The Board oversees governance, approves strategic direction, and assesses performance. It ensures that CGIAR’s work is guided by evidence-based decision-making, financial oversight, and programmatic alignment with the 2030 Strategy.
The Integrated Partnership itself is a federated alliance of CGIAR centers and the System Organization, united by shared values and principles. Through integration and coordination, it aims to create synergies that amplify both individual and collective contributions toward fulfilling the ambition and purpose of the CGIAR System.
What is the value of independent evaluation?
Ibtissem Jouini: Independent evaluations serve as tools for both accountability and learning. They provide objective advice, challenge the status quo, and support continuous improvement. However, their value is only fully realized when lessons, findings, and recommendations are actively considered and implemented.
As highlighted by the Multilateral Organization Performance Assessment Network (MOPAN), many organizations (including CGIAR) face challenges in embedding evidence-based planning and ensuring that evaluations meaningfully inform decision-making. By systematically tracking, addressing, and implementing evaluation recommendations, organizations signal a strong commitment to using evidence to enhance effectiveness, performance and impact.
How can CGIAR foster an organizational culture of learning in which evidence-based planning and programming are applied?
Dr. Patrick Caron: As CGIAR advances its new research agenda, its ability to learn from past efforts, through independent evaluations and reviews, will determine its success. Strengthening MRs and reinforcing the role of the Board in ensuring that recommendations are addressed are key steps toward achieving lasting impact.
An institution that does not adapt cannot survive. In my experience as a former Director General for Research and Strategy at CIRAD, I have always considered the outcomes of independent evaluations essential components of the learning and adaptation process. Evaluations can serve multiple, and at times divergent, purposes: accountability, which gives governors, management, and donors confidence in the system; and organizational learning, which strengthens programming capacity and fosters a learning culture based on reflection and continuous improvement. Meeting both objectives raises the level of ambition and calls for a multifunctional approach to the design of monitoring, evaluation, and MR systems. This may require designing distinct processes and methods that serve different needs, provided they do not overburden the system.
Why is a solid MR system needed, and how is CGIAR dealing with MRs?
Ibtissem Jouini: A well-designed MR system ensures that recommendations are not merely noted but actively considered in decision-making processes, leading to more informed and evidence-based actions, as per the MOPAN Indicator Framework. This fosters a culture of continuous improvement and strengthens accountability to stakeholders by providing transparency on how evaluation findings are used to drive organizational change. Formalized MR systems are widely adopted among development cooperation organizations.
To strengthen the use and engagement with evaluations across CGIAR and support the formalization of MRs and uptake of recommendations, the IAES Evaluation Function developed a guideline, Management Engagement and Response (MER): Process and Performance Evaluations in CGIAR, which puts into practice the key principles in the CGIAR Evaluation Framework and Policy. Complementing this from the management side, the Portfolio Performance Unit (PPU) developed a CGIAR internal Process Note for responsible business units and entities, to ensure that recommendations from independent evaluations are systematically tracked, addressed, and implemented.
Progress in implementing MR actions is documented annually in the CGIAR Internal Practice Change (Type 3) Report. In 2022, the PPU also introduced an MR Actions Tracking Tool to track the implementation of MRs to IAES' independent process and performance evaluations.
What can IPB members do to foster evidence-based decision-making?
Dr. Patrick Caron: A robust MR system, characterized by adaptability and continuous improvement, is essential to ensure that learning from evaluations leads to tangible change and strengthens accountability to both funders and the communities that CGIAR serves. Within CGIAR, the IPB plays a critical role in this process. IPB members review and endorse MRs, helping to ensure that evaluation findings inform strategic decision-making. Assessing how management responds to and implements evaluation recommendations is a component of the IPB's oversight and performance assessment responsibilities.
What did the external review reveal about the CGIAR MR system?
Ibtissem Jouini: While acknowledging CGIAR’s progress in establishing an MR System for independent evaluations since 2021, and recognizing ongoing efforts to strengthen it, the external review team identified several systemic challenges. Although various processes exist to capture and share lessons learned, their overall effectiveness is constrained by key gaps. This limits CGIAR’s ability to fully leverage evaluation insights for continuous learning and strategic decision-making.
Currently, there is no formal mechanism linking the implementation of recommendations with decision-making or program design. This weakens CGIAR's capacity to demonstrate how evaluations contribute to organizational learning or inform strategic adjustments. Operational challenges, including transitions and overlapping responsibilities, further complicate implementation efforts. The lack of a unified monitoring, evaluation, and learning (MEL) system across centers exacerbates fragmentation in tracking and follow-up. A positive development is that a new reporting arrangement is underway; it aims to strengthen linkages between monitoring, evaluation, impact assessments, and foresight functions across centers, programs, and independent bodies to support a more cohesive and effective system.
Organizational change and transition can affect the relevance and applicability of evaluation recommendations, as highlighted by the IAES Benchmarking Review and confirmed by the external review of CGIAR’s MR system. Establishing a system that is both robust and responsive is essential to navigating these challenges effectively and ensuring that recommendations remain actionable and aligned with evolving priorities.
The review team proposes three key recommendations, each supported by a set of actionable and time-bound measures:
- Improve the prioritization and feasibility of recommendations.
- Enhance the technical modalities for tracking recommendations and MR actions.
- Foster a culture of learning by streamlining processes and reducing fragmentation across centers, enabling CGIAR to better leverage insights from independent evaluations.
What are the other uses of independent evaluations?
Dr. Patrick Caron: The importance of independent evaluations is gaining recognition, particularly as public aid funding becomes more constrained, and investments are subject to greater scrutiny to ensure they make a real difference. Evaluation findings play a critical role in informing the design of new programs, prompting further analysis, and identifying gaps that require targeted research and innovation to maximize impact.
From my perspective, independent evaluation findings must be presented in a way that is accessible, timely, and directly linked to decision-making processes. Evaluations should not only inform strategic direction but also guide resource allocation and policy priorities, ensuring that evidence consistently drives meaningful and results-oriented action.
Ibtissem Jouini: While research on the factors influencing the use of evaluations continues to evolve, several best practices have emerged to enhance their uptake. These include producing more accessible formats, such as short reports, briefs, and videos, and offering one-on-one briefings to engage stakeholders more deeply. Discussions at evaluation conferences and webinars, such as those organized by the United Nations Evaluation Group (UNEG) Use of Evaluation Working Group, have highlighted the persistent challenges organizations face in ensuring that evaluations are meaningfully used. Participatory approaches, such as workshops to refine or co-create recommendations, have also gained attention as strategies to foster ownership and relevance. Importantly, even the most rigorous and high-quality evaluations do not guarantee that their findings and recommendations will be used, underscoring the need for deliberate efforts to embed evaluation use within decision-making processes.
A quote from Antonio Villamor, Internal Auditor at ICARDA, illustrates the broader utility of independent evaluations:
“Truth be told, the Big Data Platform Evaluation Report is a rich material that we at the Internal Audit Function should regularly refer to when we speak about Research Data Management. Additionally, it helped me to refocus and repurpose the initially approved [audit] assurance engagement towards Metadata Management.”
Explore the Management Engagement and Response (MER) Resource Hub