In the first in a series of blogs on the CGIAR Research Program (CRP) 2020 Review Process, Svetlana Negroustoueva, Senior Evaluation Manager, CAS Secretariat, and Ravi Ram, Evaluation Specialist and former CAS Evaluation Consultant, reflect on emerging lessons from the CRP 2020 Review process so far and explain what taking a ‘lean approach’ to evaluation means in practice.

2020: A year of CRP Reviews

In 2020, the CAS Secretariat Evaluation Function is coordinating the independent review process for the twelve CGIAR Research Programs (CRPs). As we write, the first three reviews have been completed and the next three are well underway. All twelve will be completed by the end of the year.

These reviews come at the request of the CGIAR System Council via its Strategic Information, Monitoring & Evaluation Committee (SIMEC). As CGIAR undergoes transformative change through the One CGIAR transition process, evaluations of its agricultural research offer a timely and necessary opportunity for accountability and learning.

In this blog, we discuss how generating evidence from twelve large research programs in a short time frame required us to innovate: to define a rapid process that could still yield high-quality findings.

Taking a lean approach

A lean approach to evaluation has become an increasing priority given limited resources, and has shown its potential during COVID-19, as it brings top-level evidence to decision-makers within a short turnaround time. But what exactly do we mean by a ‘lean evaluation approach’?

It is an approach nested within a well-accepted definition of evaluation as “any systematic process to judge merit, worth or significance by combining evidence and values”.

For the 2020 CRP Reviews, we applied CGIAR’s Quality of Research for Development Framework: relevance, scientific credibility, legitimacy, and effectiveness. The approach also relied on established documentation and evidence, generating limited new data.

We also set the following working principles:

  • Restrict the scope to two of the six criteria in the CGIAR evaluation policy (effectiveness and quality of science); this limited scope positioned the exercise as evaluative reviews rather than full evaluations.
  • Prioritize accountability over learning.
  • Reduce the burden on the CGIAR Research Programs being evaluated.
  • Limit primary data collection and rely more heavily on secondary data, including self-reported monitoring data from each program.
  • Use independent external evaluative review teams, each comprising a professional evaluator and a subject-matter expert in the CRP-specific area.

Standardizing and harmonizing the data

A key difference between the evaluative reviews and conventional evaluations lay in the level of evaluation management by the CAS Secretariat before and during the reviews.

To promote standardization and harmonization of data quality across the twelve reviews, we pre-analyzed two sets of data: (1) performance monitoring data collected by the CRPs (e.g., the rate of achieving planned milestones, and innovations) and (2) bibliometric data (citations, author productivity, impact estimates, etc.) based on each program’s publications.
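To give a flavour of what this kind of pre-analysis can look like, here is a minimal sketch in Python with pandas. The column names, status labels, and figures are purely hypothetical illustrations for this blog, not the actual CGIAR monitoring schema or bibliometric pipeline.

```python
import pandas as pd

# Hypothetical milestone-level monitoring export; the columns and
# status labels are illustrative, not the actual CGIAR reporting schema.
milestones = pd.DataFrame({
    "crp": ["CRP-A", "CRP-A", "CRP-A", "CRP-B", "CRP-B"],
    "status": ["achieved", "achieved", "delayed", "achieved", "cancelled"],
})

# Rate of achieving planned milestones, per program.
achievement_rate = (
    milestones.assign(achieved=milestones["status"].eq("achieved"))
    .groupby("crp")["achieved"]
    .mean()
)

# Hypothetical publication records for simple bibliometric summaries.
pubs = pd.DataFrame({
    "crp": ["CRP-A", "CRP-A", "CRP-B"],
    "citations": [12, 3, 40],
    "n_authors": [4, 2, 7],
})

# Publication counts, mean citations, and mean team size, per program.
biblio_summary = pubs.groupby("crp").agg(
    publications=("citations", "size"),
    mean_citations=("citations", "mean"),
    mean_authors=("n_authors", "mean"),
)

print(achievement_rate)
print(biblio_summary)
```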

We provided the pre-analyzed data to the external review teams in a standardized format, with no interpretation, to preserve the independence of the evaluative reviews (more on this in forthcoming blogs). With a short timeframe for course correction, we also maintained frequent contact with our contacts in each CRP and with the evaluative review teams, and implemented quality assurance checklists and midpoint check-in discussions with the teams. After-action reviews following the first round identified lessons to refine the evaluative review process.

Pros and cons of the approach – early conclusions

Based on early results from the first three CRP Reviews, the evaluative review approach shows strong potential compared to conventional evaluations and could be applied beyond CGIAR and agricultural research programs.

Emerging comparative benefits include reduced cost in both time and money, a quick turnaround with a good return on the time invested by evaluation teams, and usefulness for accountability-focused mandates, with some window for learning.

Yet this approach cannot offer the same scope and depth as a full evaluation: for example, it misses the nuances that analysis against a wider set of evaluation criteria, or in-depth exploration of cross-cutting themes, would provide. It also requires proportionately more management and oversight, as well as substantial pre-analysis, which may not be feasible in other contexts. Further, the evaluative review approach leaves little time for corrective action and is constrained in addressing learning. Time limitations make it challenging to apply desired quality assurance procedures involving multiple reviews of deliverables. Moreover, users’ expectations must be managed through regular communication, as the findings of evaluative reviews are by design much more limited than those of conventional evaluations.

Overall, the evaluative reviews appear to satisfy top-level accountability requirements, while also documenting some system-level learning. For effective learning about specific research programs or more comprehensive analysis for accountability, however, the evaluative reviews are not appropriate, and a full evaluation is indispensable.

Find out more

We invite you to read the reports from the three reviews completed in July 2020.

You can also find lots more information on the CRP Reviews 2020 webpage and in our Frequently Asked Questions.

Stay tuned for the release of the remaining nine CRP review reports over the next few weeks, as well as further updates on lessons learned in applying the quality of science criterion, the importance of data quality, and the use of a theory-based approach to evaluating effectiveness.

Svetlana Negroustoueva, Senior Evaluation Manager, CAS Secretariat

Ravi Ram, Evaluation Specialist, Former CAS Evaluation Consultant