Millions of households reached with biofortified crops. Progress on child growth and nutrition unlocked. That’s the story the modelled reach data and subsequent evaluations seemed to tell. But the gap between projected reach and observed reach tells a different story: one with real consequences and opportunity costs.
A recent technical note by SPIA compared two approaches to estimating the reach of biofortified crops (orange-fleshed sweet potato and high-iron beans) in rural Uganda. The difference between what the model projected and what was actually observed through nationally representative surveys and genetic testing (DNA fingerprinting) is not minor: in some cases, model-based reach estimates were up to 14 times higher than observed estimates.
This divergence matters, not just for what it reveals about a specific model, but because of its broader implications for how we monitor the reach and adoption of agricultural innovations. Models like the one developed by HarvestPlus were built with good intentions. In earlier years, they likely offered reasonable estimates, especially when NGO partners were actively distributing seeds and vines and closely engaged in dissemination efforts.
However, as circumstances shifted, with reduced funding, diminished field oversight, rapid expansion into new areas, and growing difficulty in identifying planting material, the assumptions underpinning the model remained intact. While certain parameters were updated, the model's foundational premises went largely unexamined.
The result is a form of monitoring drift: a growing disconnect between what models report and what happens on the ground. In this case, the disconnect has real costs in terms of missed opportunities to improve nutrition, delayed course corrections, and continued investments based on overstated reach.
This drift raises broader questions for MELIA systems across CGIAR and beyond. When is a model or a monitoring tool useful, and when does it become too detached from reality to inform decisions? How do we balance the need for efficient, scalable monitoring tools with the need to validate them periodically against independent data? The argument here is not to discredit monitoring models; they remain essential for tracking progress where real-time data are scarce. But models need to be grounded in evidence and updated regularly, especially when they keep delivering good news despite signals to the contrary.
In the Uganda case, we now have access to a larger toolkit: nationally representative surveys, DNA fingerprinting, and other methods that can provide more reliable insights. While these tools improve accuracy, they come at a cost that limits scalability. The challenge, then, is not a lack of methods; it is finding a cost-effective combination of ground-truthing and model-based estimation that works at scale. A solution of this kind warrants thoughtful investment in monitoring processes and a willingness to test, iterate, and fail fast.
The case of biofortified crops in Uganda reminds us that the opportunity costs of monitoring drift are far from abstract. A rigorous impact study has shown that orange-fleshed sweet potato can lead to impressive gains in child growth outcomes. Imagine the aggregate health gains that could have been achieved if millions of households had actually been reached, or the benefits that might have been unlocked had the drop in reach been detected early and prompted course correction. When monitoring systems lose touch with realities on the ground, we risk becoming locked into path-dependent decisions that delay learning, derail funding, and undermine the benefits these innovations are designed to deliver.