From 28 March until 19 April, CAS/Evaluation and the FAO Office of Evaluation co-hosted a discussion on EvalForward. We hope at least some of you were able to follow it.
While broad in scope, this discussion was part of ongoing work and consultations towards developing guidelines to operationalize the Quality of Science evaluation criterion in the revised CGIAR Evaluation Policy.
As many as 22 participants from a range of backgrounds, including non-evaluators and non-CGIAR affiliates, engaged by posting contributions. Among them were CGIAR affiliates, FAO affiliates, donors, independent consultants, evaluators, researchers, university professors, social media experts and leaders of private organizations. Special thanks go to contributors who were or are members of the MEL COP, including Keith Child, Claudio Proietti and Valentina De Col. The richness of the discussion among such a diverse range of experts highlighted broad agreement on the importance of framing and using context-specific evaluation criteria when evaluating science, technology, and innovation.
During the discussion, the following frameworks were introduced: the Quality of Research for Development (QoR4D) frame of reference (directly or indirectly linked to the evaluation criteria of relevance, legitimacy, effectiveness and scientific credibility), the Research Excellence Framework (REF), and the RQ+ Assessment Instrument.
The discussants agreed on the importance of using a mixed-methods approach that combines qualitative and quantitative indicators, explaining that such an approach is especially needed when evaluating the relevance of research questions and the fairness of the research process.
The following quantitative methods were suggested: bibliometric analysis, altmetrics and social network analysis. The use of bibliometric analysis was mentioned for: (1) evaluating science impact, i.e., impact within a scientific field, still best measured by the number of citations that an article or book chapter receives; (2) assessing the legitimacy of research findings and the credibility of knowledge products; (3) providing a good indication of the quality of science (QoS), since published papers have already passed a quality threshold by being peer-reviewed by experienced scientists; (4) providing an overview of the efforts made and of the scientific outreach achieved. Social network analysis (SNA) of publications was suggested as a complement to bibliometric analysis, particularly for the legitimacy dimension, to explore who collaborates on these publications and what their social and organizational context is.
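To make the bibliometric and SNA suggestions concrete, the following is a minimal, purely illustrative sketch in Python using the networkx library. The publication records, author names and citation counts are invented for the example and do not come from the discussion:

```python
# Illustrative only: a toy co-authorship network built from hypothetical
# publication records, not from any real CGIAR dataset.
from itertools import combinations
import networkx as nx

# Hypothetical publication records: authors and citation counts.
publications = [
    {"title": "Paper A", "authors": ["Alice", "Bob"], "citations": 42},
    {"title": "Paper B", "authors": ["Alice", "Carol", "Dan"], "citations": 17},
    {"title": "Paper C", "authors": ["Bob", "Carol"], "citations": 5},
]

# Simple bibliometric indicator: total citations per author.
citations = {}
for pub in publications:
    for author in pub["authors"]:
        citations[author] = citations.get(author, 0) + pub["citations"]

# Co-authorship network: authors are nodes, shared papers create edges.
G = nx.Graph()
for pub in publications:
    for a, b in combinations(pub["authors"], 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Degree centrality hints at who brokers collaboration, one possible
# entry point for exploring the legitimacy dimension.
for author, score in sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]):
    print(f"{author}: centrality={score:.2f}, citations={citations[author]}")
```

In practice, such records would be pulled from a bibliographic database rather than typed by hand, and the centrality and citation figures would be read alongside, not instead of, the qualitative evidence discussed below.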
However, using quantitative methods alone would give a limited picture when assessing QoS. Several limitations of bibliometrics were elaborated: (1) not all science, innovation and research products are included and properly recorded in bibliographic databases, or even published, so not all products can be assessed; (2) it can take decades for the results of investments in agricultural research to become visible; (3) cost effectiveness is difficult to assess, since the fraction of support attributable to each funding source is not easily determined.
The need to also include qualitative methods was highlighted, including, but not limited to, qualitative assessments through interviews and/or surveys, evidence synthesis, and measuring the objectives reached against the Theory of Change. IT tools such as ATLAS.ti, MAXQDA, NVivo, Cynefin SenseMaker, Sprockler and NarraFirma could support the qualitative analysis.
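The dedicated tools named above offer far richer functionality, but as a toy illustration of the idea behind them, a first-pass coding of interview responses against Theory of Change themes could look like the Python sketch below. The themes, keywords and responses are all invented for the example:

```python
# Illustrative only: naive keyword-based coding of hypothetical
# interview excerpts against invented Theory of Change themes.
from collections import Counter

themes = {
    "capacity building": ["training", "skills", "mentoring"],
    "adoption": ["uptake", "adopted", "using"],
    "partnerships": ["collaboration", "partner", "network"],
}

responses = [
    "The training improved our data skills considerably.",
    "Farmers adopted the new variety after the field days.",
    "Collaboration with the local partner made uptake easier.",
]

# Count how many responses touch on each theme.
counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in themes.items():
        if any(kw in lowered for kw in keywords):
            counts[theme] += 1

for theme, n in counts.most_common():
    print(f"{theme}: mentioned in {n} of {len(responses)} responses")
```

Real qualitative coding is, of course, interpretive rather than keyword matching, which is exactly why the discussants stressed the evaluator's judgment and the tools listed above.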
Qualitative assessments have limitations of their own, since they require the evaluator to make subjective judgments. Furthermore, in some cases the least privileged people, who may hold valuable insight and knowledge, might not have access to participatory approaches and methods.
While mixed methods seem the best solution, it was acknowledged that it would be hard to find a method that works in all contexts. Even without standardizing methods, efforts should be made to design evaluations so that results can be used, to the extent possible, at the institutional level, for instance for higher-level strategic and programmatic planning, but also at the level of those who are affected and impacted.
For further reading, feel free to consult the EvalForward discussion page directly here.
Next steps: a follow-up session on approaches and methods to inform transformation in the context of evaluating quality of science will be held on June 10 at the European Evaluation Society (EES) annual conference. It will be led by the CAS Secretariat Evaluation Function lead, Ms Svetlana Negroustoueva, together with Enrico Bonaiuti (ICARDA/CGIAR), Rachel Sauvinet (FAO Office of Evaluation), Robert McLean (IDRC, Canada) and Valentina De Col (ICARDA/CGIAR), and chaired by Ms Patricia Rogers (independent). Consult the conference's full programme here.