Abstract
This Technical Note provides grounded, forward-looking guidance for evaluators seeking to explore the role of Artificial Intelligence—particularly generative AI—throughout the evaluation lifecycle. It is designed to support responsible, adaptive, and ethical engagement with AI tools in evaluation practice.
- Learn how AI can enhance evaluation design, analysis, and communication in creative, human-centered ways.
- Navigate ethical and operational boundaries collaboratively to ensure accountability, transparency, and inclusion.
- Experiment with purpose using curated tools, practical prompts, and real-world reflections that foster critical thinking and continuous learning.
The CGIAR 2030 Research and Innovation Strategy commits to organizational change through seven ways of working—including “Making the digital revolution central to our way of working.” In that context, AI introduces both opportunities and risks to evaluation practice. Guided by the CGIAR-wide Evaluation Framework, the integration of AI tools requires a governance approach that balances innovation with ethical responsibility—ensuring transparency, fairness, accountability, and inclusivity. Whether you're new to AI or already experimenting, this Technical Note helps you stay thoughtful, responsive, and human-centered in a fast-evolving digital landscape.
Citation
Cekova, D., Corsetti, L., Ferretti, S. and Vaca, S. (2025). Considerations and Practical Applications for Using Artificial Intelligence (AI) in Evaluations. Technical Note. CGIAR Independent Advisory and Evaluation Service (IAES). Rome: IAES Evaluation Function. https://iaes.cgiar.org/evaluation