Frequently Asked Questions (FAQs)

Welcome to the FAQ page on evaluation and evaluability assessments (EAs) at CGIAR. This resource aims to provide comprehensive answers to common questions about conducting evaluability assessments.

For queries and feedback contact IAES-Evaluations@cgiar.org.

1. What is the difference between evaluation and research?

The most widely accepted definition of evaluation is by theorist Michael Scriven (1991), who emphasized the role of values: “Evaluation is the systematic process to determine merit, worth, value or significance” (p. 139). In contrast to research, evaluation questions originate with evaluation users or key stakeholders in the evaluation. This involves multiple stakeholder groups, including those who will make decisions based on the evaluation results.

As Daniel Stufflebeam (1985) succinctly states, drawing attention to a key difference between research and evaluation, “The purpose of evaluation is to improve, not to prove.” Evaluations therefore provide evaluative evidence and actionable recommendations to support improved policies and interventions. Scriven (1991) highlights the role of criteria, standards, and values, noting that “what distinguishes evaluation from other applied research is at most that it leads to evaluative conclusions. To get to them requires identifying standards and performance data, and the integration of the two” (p. 144).

2. How does CGIAR define and approach evaluation?

CGIAR’s Evaluation Policy (2022) defines evaluation as “a systematic and objective assessment of the design, implementation, and results of an ongoing or completed project, program, initiative, policy, or operational modality in CGIAR, in line with this Evaluation Policy and standards and principles under the Evaluation Framework” (p. 3).  

The types of evaluation covered under CGIAR’s Evaluation Policy include process and performance evaluations. In the CGIAR context, impact assessments or impact evaluations fall under other types of assessment that complement process and performance evaluations and provide inputs to them, but are not themselves subject to the Evaluation Policy. Examples include assessments funded and/or conducted by the Standing Panel on Impact Assessment (SPIA), performance monitoring undertaken by management, and reports and evidence from other internal and external assurance functions.

3. What is the value of evaluation for scientists and researchers at CGIAR?

At CGIAR, scientists often conduct research within the context of a program that aims to use the results of that research to effect change. Evaluation may be of interest to assess the value of the research results themselves, or to determine the value of the outcomes to which those results contribute within the larger programmatic context. This may involve examining how the research results are taken up, how they are applied in different contexts by different actors, and to what effect. Evaluation may be used to determine to what extent the change realized is “good” and for whom.

4. What are evaluability assessments?

An EA is a systematic examination of a program's readiness to be evaluated. An EA is the first step to ensuring that an evaluation will be cost-effective, will result in a useful learning process, and will produce findings that inform decisions and improve program performance. It helps clarify an evaluation’s feasibility, scope, and value for money. 

Michael Scriven (1991), describing evaluability as “the first commandment in accountability” (p. 138), framed it as both an obligation and a responsibility: “It is not enough that one be able to explain how one spent the money, but it is also expected that one be able to justify this in terms of the achieved results” (p. 1). He noted that low evaluability carries a high cost.

At CGIAR, “evaluability” is one of the standards in the Evaluation Framework and is elaborated in the Evaluability Assessment Guidelines: “Evaluability refers to the extent to which an intervention can be evaluated in a reliable and credible fashion; the concept is central to a culture of results. A strong focus on evaluability at the design stage facilitates overall ‘measurability’, monitoring and subsequent evaluation.” Scriven’s framing of evaluability as a matter of accountability reinforces CGIAR’s emphasis on a strong culture of results.

5. How do EAs differ from other evaluative exercises?

There are numerous kinds of evaluative exercises that focus on different types of “evaluands”, what Scriven (1991) refers to as “a generic term for whatever is being evaluated – person, performance, program, proposal, product, possibility, and so on” (p. 139). There are also many kinds of evaluations that vary in their approach, timing, and purpose. 

Formative evaluation, for example, is an iterative process that focuses on providing feedback and improving the development of a program or intervention. It may be ongoing or conducted at the midpoint of a program’s lifecycle (PRMS). Summative evaluations are conducted at the end of a program to explore what outcomes were achieved, and what significance and value those outcomes had for beneficiaries (Big Data in Agriculture Platform). Ex-post evaluation occurs well after a program has ended, with the purpose of identifying long-term outcomes and sustainability. Meta-evaluation, or the evaluation of evaluations, is what Scriven (1991) considers of great importance to the evaluation field from both a scientific and a moral perspective. Asking “Who evaluates the evaluator?” reminds us that evaluation is a self-referential activity.

While an EA can serve formative purposes, an EA is not an evaluation; it does not, for example, serve outcome, impact, or summative evaluation purposes (Walser, 2015). It is important for stakeholders to understand this when initiating an EA and disseminating its findings. An EA prepares programs for evaluation to the extent that program goals are agreed on, information needs are well-defined, evaluation data are obtainable, and intended users are willing to use evaluative information (Wholey, 1979, 2004, 2015).

6. How will I benefit from an EA? Why should I devote resources to an EA?

An EA is an effective tool for managers at CGIAR to maximize the potential of an evaluation (see a recent CGIAR blog). Both the process of undertaking an EA and the results it provides will be of benefit to program managers, staff, and other stakeholders. 

EAs help prepare for successful evaluations that are aligned with management and stakeholder (including donor) expectations. Program managers will learn about the program’s readiness for evaluation and the extent to which its M&E plans are adequate, identify areas for improvement, and receive an in-depth analysis of the program’s logic. The desired products of an EA include a thorough description of the program, the key questions to be addressed by the evaluation, and an evaluation plan based on stakeholder participation and inputs. What these products look like is discussed at the beginning of an EA, when its specific purpose, objectives, scope, timeline, and deliverables are determined.

7. Why are EAs useful to commissioners of evaluations?

Commissioners are interested in cost-effective evaluations that are credible and offer results that support both accountability in programming and learning for all stakeholders. In supporting cost-effectiveness, an EA will help determine which evaluation type is warranted, an approach and design, appropriate and targeted areas of inquiry, and the availability of evidence used to make evaluative judgments. In short, an EA provides valuable information on both how and when to conduct an evaluation, ensures alignment with the Evaluation Framework’s standards, and helps to maximize the value of resources invested. Commissioners support the use of EAs since they help plan for successful evaluations. 

8. What is the value-added of EAs for stakeholder engagement and learning?

The process of undertaking an EA helps strengthen stakeholder understanding of a program, align donor and other stakeholder expectations for the evaluation, and build stakeholders’ capacity both to participate in the evaluation and to use its results. EA results can also support planning for stakeholder engagement, ensuring greater participation not just in the evaluation but in the program itself over the long term.

9. How can an evaluability assessment enable my program to achieve long-term outcomes and impact?

An EA aims to help programs better establish their program logic and a means for monitoring and evaluating outcomes and long-term impact. EAs focus on the presence and quality of M&E plans and on how well those plans are designed to facilitate evaluation, ensuring a holistic and thorough approach to M&E that is suited to the program’s goals and logic. In doing so, EAs support an effective MEL approach, enabling programs and their stakeholders to realize the long-term impact they seek.

10. When is the best time to conduct EAs?

EAs are most effective when implemented at the design stage of an intervention, although they can be conducted at any point in the project cycle prior to an evaluation. They can assist in developing sound program logic, helping stakeholders understand underlying assumptions and clarify their roles in contributing toward desired goals. They can also support the development of monitoring and evaluation plans.

11. What kind of resources or documents do I need to share as a program manager to benefit from an evaluability assessment?

The following documents should be collected, if available. Items in bold indicate the minimum, core required list.

  • Proposal or strategy documents.
  • Theory of Change (ToC) at all intervention levels, including historic and current versions if updated annually.
  • Results Framework or other documents articulating inputs, activities, outputs, desired outcomes and impacts, and indicators.
  • Plan of Work and Budget, Annual (and other) reports.
  • Project lists (with related documentation).
  • Contact lists for internal and external stakeholders and key informants.
  • Standard indicator description sheets.
  • ISDC reviews, funder review/comments on the proposal.
  • Previous independent or other evaluations, studies, and impact assessments.
  • Access to key databases and data sources with relevant information, e.g., monitoring data and staff lists, access to Web of Science for bibliometrics, dashboards.
  • Internal audit or other assurance-related documents.
  • Governance-related information (Independent advisory bodies, structure, ToRs, composition, meeting minutes).

12. What kind of inputs should I make as a program manager for a successful evaluability assessment?

Program manager inputs, participation, and leadership are critical to the success of an evaluability assessment at all phases, including design, implementation, learning, and use of results. As with all evaluations, manager participation and buy-in support the development of the EA design, the EA team’s access to key stakeholders and existing documentation during the data collection phase, and the participation and availability of stakeholders during inception and validation exercises. In addition, managerial participation in reporting EA results to governance bodies facilitates their use. Activities are coordinated with designated professionals, an evaluation manager, the IAES (for independent evaluations), the evaluator(s), and evaluation users.

Throughout the process, the nature and timing of management engagement will vary; however, management leadership and engagement are key in every evaluation phase. Management can lead by putting forward learning questions, providing feedback, being supportive, responding to recommendations, and ensuring that actions are implemented.

13. Where can I learn more about evaluability assessments?

Davies, R. (2012). A bibliography on evaluability assessment. Monitoring and Evaluation News. https://mande.co.uk/2012/lists/evaluability-assessments-bibliography/a-bibliography-on-evaluability-assessment/

Davies, R. (2013). Planning Evaluability Assessments: a synthesis of the literature with recommendations. (Working Paper 40). Cambridge, UK: Department for International Development.

Davies, R. (2021). Evaluability assessment: An assessment of the extent to which an intervention can be evaluated in a reliable and credible fashion. Better Evaluation. https://www.betterevaluation.org/methods-approaches/themes/evaluability-assessment

Independent Advisory and Evaluation Service (IAES). (2022). CGIAR Evaluation Guidelines: Conducting and using evaluability assessments within CGIAR. Rome: Independent Advisory and Evaluation Service.

Lam, S., & Skinner, K. (2021). The use of evaluability assessments in improving future evaluations: A scoping review of 10 years of literature (2008–2018). American Journal of Evaluation, 42(4), 523–540.

Peersman, G., Guijt, I., and Pasanen, T. (2015). Evaluability assessment for impact evaluation: guidance, checklists, and decision support. ODI. https://odi.org/en/publications/evaluability-assessment-for-impact-evaluation-guidance-checklists-and-decision-support/ 

Peersman, G., Guijt, I., and Pasanen, T. (2015). Evaluability Assessment for Impact Evaluation. Better Evaluation.  https://www.betterevaluation.org/tools-resources/evaluability-assessment-for-impact-evaluation 

14. Where can I learn more about program evaluation?

The following websites may be useful to learn more about program evaluation:

https://www.eval.org/About/What-is-Evaluation 

https://europeanevaluation.org 

https://www.betterevaluation.org 

And the following publications:

  • Alkin, M. C. (Ed.) (2013). Evaluation roots: A wider perspective of theorists’ views and influences (2nd ed.). Thousand Oaks, CA: Sage.
  • Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Thousand Oaks, CA: Sage.

References

  • Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.
  • Scriven, M. (1991). Evaluation thesaurus (4th ed.). Thousand Oaks, CA: Sage.
  • Stufflebeam, D. L., & Shinkfield, A. J. (1985). Systematic evaluation: A self-instructional guide to theory and practice. Hingham, MA: Kluwer-Nijhoff.
  • Walser, T. M. (2015). Evaluability assessment in higher education: Supporting continuous improvement, accountability, and a culture of assessment. Journal of Assessment and Institutional Effectiveness, 5(1), 58–77.
  • Wanzer, D. L. (2021). What is evaluation? Perspectives of how evaluation differs (or not) from research. American Journal of Evaluation, 42(1), 28–46.
  • Wholey, J. S. (1979). Evaluation: Promise and performance. Washington, DC: Urban Institute.
  • Wholey, J. S. (2004). Assessing the feasibility and likely usefulness of evaluation. In K. E. Newcomer, H. P. Hatry, & J. S. Wholey (Eds.), Handbook of practical program evaluation (2nd ed., pp. 33–62). Hoboken, NJ: Jossey-Bass.
  • Wholey, J. S. (2015). Exploratory evaluation. In K. E. Newcomer, H. P. Hatry, & J. S. Wholey (Eds.), Handbook of practical program evaluation (4th ed., pp. 88–107). Hoboken, NJ: Jossey-Bass.