Since April 2021, a series of activities has taken place as part of SPIA’s offer to provide advice to One CGIAR Initiative Design Teams (IDTs) in the development of their impact assessment plans. This page summarizes the SPIA guidance and provides answers to some frequently asked questions.

FAQs on SPIA Advice to Impact Assessment Plans of CGIAR Initiatives

Which methods are recommended by SPIA?

Initiatives are very different and there is no “one size fits all” when it comes to impact assessment methods. The appropriate choice of methods depends on which research questions are important for the impact assessment of each initiative to answer. SPIA can support in matchmaking initiatives and people with relevant methodological expertise.

What is the difference between the MEL and IA parts of the MELIA plan?

Impact assessment is research designed from the start of the agricultural research for development process and as an integral part of it. It rests on establishing a credible counterfactual; this is what separates impact assessment from monitoring, which focuses on establishing facts.

Monitoring, evaluation and learning (MEL) and impact assessment (IA) are different efforts but can complement each other. From the impact assessment side, cooperation in documenting the rollout of innovations and policy advice provides an opportunity to design better studies of long-term and large-scale impacts.

An obstacle in designing such studies is the lack of information on where, when, how, why and to whom innovations were rolled out or policy advice activities took place. Access to reliable information on rollout is crucial for designing a convincing identification strategy. Integrating this data collection from the start of the scaling process makes it less costly and time-consuming than going back to look for the data at a later stage.
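As a sketch of what such rollout documentation might look like in practice, the where/when/how/why/to-whom information could be captured in a minimal structured record. All field names and values below are illustrative, not a prescribed CGIAR format:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class RolloutEvent:
    # Minimal record of one rollout step; all fields are illustrative.
    innovation: str      # what was rolled out
    location: str        # where (e.g. district or village code)
    start_date: date     # when rollout began
    channel: str         # how (e.g. extension visit, seed distribution)
    targeting_rule: str  # why these recipients were chosen
    recipients: int      # to whom / how many were reached

event = RolloutEvent(
    innovation="drought-tolerant maize variety",
    location="district-042",
    start_date=date(2022, 3, 1),
    channel="seed distribution via cooperative",
    targeting_rule="all cooperative members in pilot districts",
    recipients=350,
)
# asdict() turns the record into a plain dict, ready to append to a shared log
print(asdict(event)["recipients"])  # → 350
```

Keeping records like this from the first scaling activity onward is what later allows an analyst to reconstruct who was exposed to the innovation, and when, without costly retrospective data collection.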

How is Theory of Change used in impact assessment?

Impact assessment tests causal relationships, so for the purposes of impact assessment studies it is important that the theory of change carefully specifies the causal pathways that are hypothesized to lead to impact. Impact assessment studies should then be designed to specifically test the steps in the causal pathway with the most uncertainty. If impact is not reached, or only partially reached, impact assessment studies can show where the causal chain broke down, and can test and provide evidence on complementary actions that could help increase impact.

How do we best use the limited space for impact assessment in the proposal template?

Of the many questions in the theory of change that could be tested, it will be impossible to conduct rigorous impact assessment studies on all of them. It is therefore important to prioritize studies that test the key assumptions of your initiative.

How should the initiatives prioritize learning studies versus accountability studies considering the 3-year time frame?

The PRMF is clear that initiative impact assessment plans should include studies that test the assumptions underlying the ToC. Within a 3-year time frame it is reasonable to expect results from learning studies that can provide valuable input for decision making in research and scaling systems, allowing initiatives to learn and adapt in order to increase 10-year impacts.

There will be initiatives that are already scaling innovations or providing policy advice, but getting impact estimates on SDG outcomes within a 3-year period is too ambitious. You need a lot of statistical power to obtain estimates of effects on such outcomes. Since an underpowered study is a bad investment, it is better to make sure that we have the building blocks to obtain rigorous long-term, large-scale estimates in the future. It is important to define the important questions to answer from the start and to prepare to answer those questions with the appropriate method. Don’t leave these questions for the end of the process, when it will be too late to collect some of the data needed.
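To make the statistical power point concrete, a standard back-of-the-envelope sample-size formula for comparing two group means shows why small effects on SDG-level outcomes need large samples. The sketch below uses conventional choices (5% two-sided significance, 80% power) and expresses the minimum detectable effect in standard-deviation units:

```python
import math

def n_per_arm(mde_sd: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate sample size per arm for a two-sample comparison of means.

    mde_sd:  minimum detectable effect in standard-deviation units.
    z_alpha: normal quantile for 5% two-sided significance (1.96).
    z_beta:  normal quantile for 80% power (0.84).
    """
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / mde_sd ** 2)

print(n_per_arm(0.2))  # → 392 observations per arm for a modest 0.2 SD effect
print(n_per_arm(0.5))  # → 63 per arm for a large 0.5 SD effect
```

Effects on outcomes like household income or poverty are usually well below 0.2 standard deviations, and clustering in village-level rollouts inflates these numbers further, which is why rigorous 3-year impact estimates on SDG outcomes are rarely feasible.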

Is there a place for cost-benefit analysis in the impact assessment plan?

For a specific innovation we may be interested in calculating the cost to the individual farmer of using a technology and comparing it to the farmer’s benefit from the technology. This can be a relevant learning question to ask in order to understand why certain farmers adopt and others do not.
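The farmer-level comparison is simple arithmetic: a net benefit and a benefit-cost ratio per farmer, which can then be compared across farmer types to understand adoption patterns. A purely illustrative sketch, with all numbers hypothetical:

```python
def farmer_net_benefit(yield_gain_kg: float, price_per_kg: float,
                       input_cost: float, labor_cost: float) -> tuple[float, float]:
    """Return (net benefit, benefit-cost ratio) for one farmer.

    All inputs are hypothetical, in the same currency unit.
    """
    benefit = yield_gain_kg * price_per_kg
    cost = input_cost + labor_cost
    return benefit - cost, benefit / cost

# Hypothetical smallholder adopting an improved seed variety
net, ratio = farmer_net_benefit(yield_gain_kg=200, price_per_kg=0.25,
                                input_cost=25, labor_cost=10)
print(net, round(ratio, 2))  # → 15.0 1.43
```

A ratio above 1 suggests adoption pays for this farmer; repeating the calculation with heterogeneous yield gains or costs across farmer types can help explain why some adopt and others do not.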

When thinking of the system as a whole, we need to start from the fact that science is an activity with a high degree of uncertainty. Out of all the research done, a few projects will have big wins. Estimating the benefits of those big wins can justify investment in the system as a whole. This is the idea for the initiatives: estimating rates of return to each activity will not be feasible, but documenting the rollout of potential big wins sets you up well to obtain rigorous estimates of impact that may be enough to justify investment in the full set of research activities.

What is SPIA’s advice on the appropriate role of RCTs in the impact assessment plans?

Randomized Controlled Trials (RCTs) are typically learning studies and allow testing of short-term impacts. At the same time, they can become the basis for longer-term studies. There is an example in the current SPIA portfolio (IBLI) where the team can now look at 10-year impacts by building on variation created by the RCT. But we know that for many innovations and technologies these types of RCTs don’t exist, so it is not always possible to build long-term impact assessment on them.

However, if we can run an RCT at the moment we are scaling, we make things easier for the future. So for initiatives that do have scaling efforts, there is a strong case for involving an RCT expert in talks with scaling partners to see if an RCT can be designed.
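One design that often fits ongoing scaling is a randomized phase-in: every target community eventually receives the innovation, but the order is randomized, so later waves serve as temporary controls for earlier ones. A minimal sketch of the assignment step, with village IDs and wave count purely illustrative:

```python
import random

def assign_phase_in(villages: list[str], n_waves: int, seed: int = 42) -> dict[str, int]:
    """Randomly assign villages to rollout waves of (near-)equal size.

    Early waves are treated first; later waves act as temporary controls,
    so the staggered rollout itself creates the counterfactual.
    """
    rng = random.Random(seed)  # fixed seed: the assignment is reproducible and auditable
    shuffled = villages[:]
    rng.shuffle(shuffled)
    return {v: i % n_waves + 1 for i, v in enumerate(shuffled)}

waves = assign_phase_in([f"village-{i:03d}" for i in range(12)], n_waves=3)
print(sorted(set(waves.values())))  # → [1, 2, 3]
```

Because no community is permanently denied the innovation, this design is often acceptable to scaling partners, while the recorded (and seeded, hence verifiable) random assignment is exactly the rollout documentation needed later for a credible identification strategy.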

What can be done at the initiative level and where are there benefits in working together at the system level?

As learning studies are to be designed as an integral part of the research, they should be set up within the initiatives, as the PRMF also indicates. When it comes to accountability studies, some parts need to be developed within the initiative, while for other parts there are opportunities for cooperation at the system level.

To estimate long-term impacts we need to know who is being reached by CGIAR innovations and policy advice. In documenting reach there are economies of scale, and working together at the system level can be beneficial. However, while documenting reach is an important step in estimating impact, reach itself is not impact. The necessary complement to documenting reach is to causally estimate the benefit of the innovation to the beneficiaries. Such estimates require rigorous methods, experimental or quasi-experimental, building on documentation of the rollout of the innovation (what happened where, when, why and with whom) or on the tracing of policy influence/engagement activities. This needs to be documented from the start and happen within the initiative, since that is where the information exists and since it requires close collaboration with research teams.

How can we learn from the SPIA country level approach and the Ethiopia strategic study?

The Ethiopia study documents reach, showing which innovations have reached which types of farmers at the national level. Reach is a necessary condition for impact, but it is not the same as impact. This needs to be complemented by rigorously designed studies to obtain causal impact estimates (see previous question).