Blog

Building a Portfolio of Impact Studies for Accountability and Learning: Progress, Lessons, and Opportunities

As I hand over the SPIA Senior Manager role to Ricardo Labarta, it seems like a good time to update the broader CGIAR impact assessment community on how SPIA is supporting impact assessment studies. SPIA’s new approach to impact assessment in AR4D is grounded in methodological advances and tailored to CGIAR’s possibly unique structure and mandate. SPIA sets out two types of impact studies: accountability studies, designed to measure long-term and/or large-scale impacts of past research, and learning studies, designed to rigorously test hypotheses about the impacts of recent innovations in pilot or scaling stages. The main difference between the two is how the results are intended to be used.

Since I joined CGIAR in the late 1990s, the system has moved a long way towards internalizing a focus on outcomes and impacts as well as (not instead of) outputs. No one would now propose research that wasn’t supported by evidence of potential impact on development outcomes; however, both the generation and the interpretation of that evidence often leave something to be desired. SPIA’s current mandate is to strengthen both areas.

One of the areas in which SPIA advanced significantly during 2019 was in funding accountability-oriented studies. Of the 68 expressions of interest reviewed, the five studies approved for funding so far will together go a long way towards filling gaps in the quality of evidence across types of research and outcomes:

  • Evaluating the Restoration of the Commons: An integrated impact assessment of a large-scale land restoration initiative in India founded on the tenets of collective action and property rights (ICRAF, IFPRI and Foundation for Ecological Security (FES))
  • Long-term diffusion and impacts of Index Based Livestock Insurance (ILRI)
  • Impact assessment of stress-tolerant rice varieties: Evidence from South Asia and Sub-Saharan Africa (IRRI and University of Arizona)
  • Rangeland Health and Index-Based Livestock Insurance: Innovations in Measurement and Evaluation (Cornell University, USDA, and ILRI)
  • Impact of a Second-Generation Conservation Agriculture Technology (Happy Seeder) on Crop Residue Burning and Air Quality in Northwestern Indo-Gangetic Plains (CIMMYT, Indian Institute of Science Education and Research (IISER), and University of Michigan)

The proposals for the approved studies can be found here. More proposals are still in the pipeline and could add climate services, fish, legumes, and dryland cereals to the list of CGIAR outputs.    

Perhaps the most surprising thing about the process of funding these studies was how hard it was. This is not a criticism of the research or of the study teams. Rather, it is evidence of just how hard it is to conduct a rigorous study of the impacts of research at scale. In nearly all cases, such studies propose quasi-experimental approaches. Doing this work well, and providing evidence to support the assumptions that necessarily underlie such approaches, requires data and information, gathered before the innovations became available, on how they rolled out over time and space. If this information wasn’t collected at the time, whether by a government agency, an NGO, or a research team, then it is virtually impossible to reconstruct. That does not mean that impact did not happen; it just means that it cannot be documented rigorously.

Fortunately, in some cases data on the rollout of innovations are available, and with a relatively small investment such data could become even more common. When I started working on impact assessment in CGIAR, it was not uncommon to use available data from governments, NGOs, or even the private sector on things like seed produced, distributed, or sold; people trained; organizations established; and policymakers briefed as proxies for adoption and even impact. As we became aware that it was not appropriate to use such output-level measures to say anything about outcomes, less emphasis was placed on obtaining or reporting these data. Re-valuing data on the dissemination of innovations at scale, and recognizing their value for the design of quasi-experimental studies, could make these data more widely available and hence increase the number and improve the quality of impact assessments. It could also reduce pressure on program monitoring units and MELIA specialists to go beyond carefully documenting and understanding activities and outputs to conducting adoption or outcome surveys, which, given scarce resources, can rarely be done at policy-relevant scales.

SPIA is now embarking on efforts with centers and dissemination partners to improve how monitoring data are gathered and used to inform the design of impact assessments. These efforts complement SPIA’s ongoing work with national data systems and other partners to integrate questions on adoption of priority CGIAR innovations into the large-scale, ongoing household and community-level surveys.

Learning-oriented studies are designed to generate information that can feed into the R&D process. Demonstrating unequivocally that innovations work and are likely to deliver intended outcomes to farmers, consumers, or other end users in real-world conditions is the responsibility of the research program and should be built into the stage-gating process. HarvestPlus continues to provide one of the best examples of how to do this well and effectively.

In the process of rigorously testing innovations under real-world conditions, researchers often learn about features of the innovation that could be barriers to uptake in the short run. Testing whether bespoke dissemination strategies, designed to account for such challenging features, can enhance uptake and impact is an essential part of research on scaling. To address this question, SPIA issued a call for innovations in late 2019 and is currently in the process of matching nearly a dozen CGIAR study teams with external impact assessment specialists to develop proposals.

The portfolio of studies that SPIA supports will primarily be selected for the quality of the individual studies and for their ability to speak collectively to more general issues. However, given the timing, it is also possible that priority questions emerging from the One CGIAR research agenda can inform the selection of the portfolio of learning-oriented studies. How would that work? Learning-oriented study results will be available by 2024 and could feed into stage-gating decisions to be taken at that time. The challenge is to match the kinds of stage-gate questions that the new CGIAR projects will likely face in three years’ time—based on their current Theories of Change and research and scaling plans—with the results expected from learning-oriented studies. Ideally these questions would be spelled out in the One CGIAR research agenda with sufficient specificity to know whether the study results are likely to be relevant. But even if they aren’t, working through this exercise would be valuable not only for selecting the most useful portfolio of learning-oriented studies but also as a practical example of what it means to plan for and use impact assessment results to inform research and scaling decisions in CGIAR.

Impact SPIA
May 28, 2020

Written by

  • Nancy Johnson

    SPIA Consultant

