Blog

Driving Change Through Evaluation: Key Insights from PCU’s Solomon Adebayo and Allison Poulos


Learning from the IDEAS 2025 Multi-Dimensional Evaluation Conference, Italy 


We caught up with Allison Poulos, Technical Reporting and Transition Manager, and Solomon Adebayo, MELIA Manager, both in CGIAR's Project Coordination Unit (PCU), to gain insights into their experiences and key takeaways from the recent Conference on Multi-Dimensional Evaluation for Influence and Transformation in Rome (4-6 March 2025). There, Allison and Solomon joined the Evaluation Function of IAES. Read the blog Transforming Evaluation for a Sustainable Future: Learnings from the IDEAS 2025 Conference.
 

Can you tell us about your background and how you engage with MELIA within CGIAR? 

Allison: I joined CGIAR in 2007, and throughout my time I've been deeply involved in program reporting. Since 2022, I've worked as part of the Project Coordination Unit, transitioning in April 2024 to my current role as Technical Reporting and Transition Manager.

For technical reporting, I rely on the Performance and Results Management System (PRMS), which the Research Initiatives used to report on their MELIA plans. PRMS was designed to support the Performance and Results Management Framework (PRMF) for the CGIAR 2030 Research and Innovation Strategy. To close the circle, the Evaluation Policy connects evaluations with the PRMF, ensuring alignment and effectiveness, and relevant recommendations are implemented in the PRMS. Our work drew on the lessons and recommendations from the 2023 PRMS study by the Evaluation Function under IAES.

Solomon: With over 20 years of experience in monitoring, evaluation, learning, and impact assessment (MELIA), I specialize in evaluating initiatives across various sectors, from individual projects to large-scale programs. I've been with CGIAR for four years, and as the MELIA Manager I help facilitate the development of MELIA Plans for research actions implemented within CGIAR's projects, ensuring data-driven decision-making and continuous improvement in agricultural outcomes.

At CGIAR, I focus on setting MELIA standards and guiding various programs to align M&E practices. I help ensure effective data aggregation, enabling decision-makers to assess outcomes, adapt theories of change, and support adaptive management. I also contribute to building a community of practice to ensure consistent implementation of standards, harmonize tools, and optimize results.

Allison Poulos and Solomon Adebayo at the IDEAS 2025 conference. Credit: IAES.


What were the most impactful sessions you attended at the IDEAS conference, and what key insights did you take away from them? 

Allison: I attended a workshop on scaling impact, where John Gargani broke down how to effectively define and measure scaling and impact. One standout takeaway was the importance of evaluating the risks associated with scaling innovations—not just the benefits. There’s a real danger in scaling poorly, as he illustrated with an example from Africa, where multiple wells were built in one area, rendering the effort ineffective. This session directly connects to what we’re doing with CGIAR’s scaling impact evaluation. It gave me valuable insights into organizing information around scaling and evaluating it properly. 

At the conference, I found the session on Assessing Evaluability of particular interest, especially the multi-agency perspectives on evaluability assessments (EAs) from CGIAR, FAO and ILO. It was insightful to see how each agency had its own approach, with clear synergies and a shared commitment to rigour, toward achieving similar goals. This shared concept of assessment can lead to strong partnerships and complementary efforts between organizations, and within CGIAR. [Evaluability Assessments Resource Hub]

Solomon: The most impactful for me was a workshop focused on utilization-focused evaluation, particularly in the context of systems transformation. One key takeaway was the importance of being intentional, before an evaluation begins, about how its results will be used. This means considering who the users will be and how the findings can reshape systems and inform decisions. The timing of when findings are shared is equally crucial, as it must align with moments when the results can directly influence decision-making processes.

The "From Evidence to Action" breakout session further stressed the importance of generating evidence that supports transformational change. In areas like climate change, evaluations must provide country-specific, predictive data that can guide better decision-making. Additionally, the work of colleagues at Evaluation function lead by Svetlana Negroustoueva, on evaluating science groups [portal], demonstrated how evidence can be leveraged in new CGIAR portfolio to ensure they are more impactful.

From left to right: Amy Jersild (consultant to IAES Evaluation Team), Solomon Adebayo, Allison Poulos of CGIAR’s PCU, Svetlana Negroustoueva (CGIAR IAES Evaluation Function Lead), and Carlos Tarazona (FAO’s evaluation office).


What practical steps or mechanisms, inspired by the IDEAS conference, would you recommend to enhance MELIA within CGIAR?

Allison: The workshop on scaling helped me better understand the rationale for a forthcoming methods note on evaluating scaling in performance and process evaluations, in the context of the EA of the Scaling program. One mechanism that stood out was the process to identify the potential benefits and risks of scaling, along with the criteria for judging acceptable risk. Better awareness of scaling impact will help us align our work with scaling initiatives and ensure we're not unintentionally scaling problems.

Solomon: I thought an important lesson was the need for a holistic approach to evaluation. Instead of solving isolated problems, evaluations should consider the broader systems and networks of change, incorporating systems thinking. This was emphasized through the concept of transformation theory, in which interconnected theories of change work together to drive systemic transformation.


How familiar are you with CGIAR's system-wide evaluation framework and policy, and how are you involved in them?

Allison: I am very familiar with CGIAR's evaluation framework, thanks to Svetlana Negroustoueva's excellent work in socializing it with us and with the broader MEL CoP. The framework is deeply integrated into the way we operate within the evolving 2025-2030 portfolio of science programs and accelerators. For example, in my work with the Performance and Results Management System (PRMS), we draw directly from the standards under the Framework and its guidelines to follow best practices and continuously improve M&E processes in CGIAR.

Solomon: I am also quite familiar with CGIAR's evaluation framework and policy, and with the broader work of the Evaluation Function under IAES, which serves as custodian of the Framework and Policy. I was part of the group under the MEL CoP that co-developed the evaluation framework and policy, to help shape evaluation functions across the diverse CGIAR centers worldwide. The framework and policy provide a global structure to guide evaluations across regions, ensuring consistency and alignment with CGIAR's objectives.

Recently, I have become actively involved in the evaluability assessments [TORs for 2025 EAs], which focus on refining systems for MEL(IA) within CGIAR. In collaboration with CGIAR internal partners and the Evaluation Function of IAES, my work helps ensure these systems effectively capture and highlight the impact of CGIAR's agricultural research, driving change and improving lives and communities. To achieve this, we make M&E systems more robust, so they can document and track progress at both macro and micro levels, allowing us to demonstrate the tangible results of CGIAR's work in agriculture and development. I'm deeply involved in ensuring that these frameworks and systems evolve to reflect real-world impact and continuously improve how we monitor and evaluate outcomes.


Have you engaged with the learnings from independent evaluations by IAES, and can you share your experience with them? 

Allison: Yes, we engage with evaluation recommendations (see CGIAR's MR response tracker), especially when it comes to MELIA and the theory of change. MELIA-related summary learnings from IAES work have given us clearer direction on why it's essential to have a MELIA plan and a dedicated MELIA focal point within the science programs and accelerators. This feedback loop strengthens the process, ensuring that programs are aligned with MELIA's objectives from the start and making evaluations more effective in driving positive change.

Solomon: Absolutely! I have engaged with the learnings from the independent evaluations conducted by the Evaluation Function under IAES. They produce valuable recommendations, and part of my role involves supporting the implementation of these recommendations to drive meaningful change.

Recently, one key recommendation was to foster the use of knowledge and evidence across the organization. To apply it, we focused on creating a stronger culture of evidence-based decision-making. This involved reaching out to key stakeholders who could help increase the use of evidence within CGIAR.  As part of this effort, I’ve been involved in coordinating the MELIA Community of Practice (MELIA CoP), bringing together experts to discuss strategies for integrating evidence into decision-making processes. The goal is to ensure that the evidence we gather is not only useful but also effectively applied in shaping our strategies and actions. This is just one example of how we've engaged with IAES evaluations to improve management response systems to recommendations [MR portal].

 

How can the CGIAR MELIA community better share its work and expand its presence in the evaluation field? 

Allison: It’s essential for the MELIA community to be more visible, and a significant way to do that is through the MELIA CoP in CGIAR. Bringing together the impact assessment teams from different centers creates a network for knowledge-sharing. These scientists, who travel to various conferences and meetings, can act as ambassadors, effectively disseminating MELIA work outside CGIAR. As we continue to strengthen this community, we’ll be able to create more pathways for sharing insights and lessons learned across the broader network. One improvement would be to enhance the cataloguing of our evaluations, foresight studies, and impact assessments within CGIAR’s knowledge management system to facilitate access. By improving how we tag and organize these materials, we could increase transparency and ensure that valuable evaluation insights are available across the organization for learning and decision-making. 

Solomon: CGIAR's MELIA CoP needs to be more intentional about claiming space and amplifying its influence and impact on decision-making. While the Evaluation Function of IAES was able to host a few sessions at the IDEAS conference, we should increase our representation and visibility by publicizing our evaluation work more widely within CGIAR and at global forums, such as EvalforEarth. It's important to encourage a greater presence for the work done by CGIAR centers at the national and regional levels (see blog from Asia Evaluation Week), showcasing their contributions to the broader evaluation and R4D community.


Beyond conferences, collaboration with other international development partners can be an effective way to share and elevate our work. For instance, many organizations are conducting similar evaluative work, including SDG evaluations and evaluability assessments, and by partnering with them, we can better understand the value of our work and highlight our contributions in a more impactful way. Joint initiatives, co-documentation, and sharing lessons learned can create valuable synergies and broaden the reach of CGIAR’s evaluation efforts. Through these partnerships, CGIAR can both contribute to and benefit from a wider network of evaluation practitioners and users of evaluations, strengthening our role in shaping evidence-based decision-making across development sectors. 
 

Group discussion during the session The Case for Assessing Evaluability: Building Capacities for Transformative M&E Practice at the IDEAS 2025 conference. Credit: IAES.

 


Evaluation
Apr 28, 2025

Written by

  • Allison Poulos

    Technical Reporting and Transition Manager for CGIAR’s Project Coordination Unit
  • Solomon Adebayo

Monitoring, Evaluation, Learning and Impact Assessment Manager at CGIAR's Project Coordination Unit

