Bibliometrics is the use of statistical methods to analyze books, articles, and other publications, especially peer-reviewed research publications. For monitoring and evaluating research programs, bibliometric methods offer quantitative insights that are scalable and transparent. CAS/Evaluation and Science-Metrix recently co-designed a technical note on Bibliometric Analysis in CGIAR; the note makes recommendations for the enhanced use of bibliometrics as one of the methods for evaluating quality of science. This blog focuses on the value of the technical note's co-design process in expanding the horizons of bibliometric analysis and motivating its further use.

After a competitive selection process, the Evaluation Function of the CGIAR Advisory Services Shared Secretariat (CAS/Evaluation) contracted Science-Metrix (Elsevier) as expert consultants to co-design a technical note providing recommendations on the use of bibliometrics for evaluating the quality of science (QoS). CAS/Evaluation leveraged Science-Metrix's expertise, its 20-year history of using bibliometrics for research program evaluation, and its unbiased, external view of CGIAR research practices. In particular, this work built on Science-Metrix's work for the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH, a bibliometric assessment of over 100 research projects that also included CGIAR work.

A participatory and iterative co-design process helped integrate and synthesize documents and experiences to identify novel ways forward. The recommendations were co-designed by systematically combining Science-Metrix's bibliometric expertise with input from CGIAR stakeholders and lessons from previous independent evaluative reviews of CGIAR Research Programs.

As a first step, Science-Metrix conducted a review of documents provided by CAS/Evaluation on CGIAR governance and the transformation to One CGIAR, prior independent evaluations, and ongoing efforts to enhance data management for CGIAR research and activities. The two groups then jointly conducted a focus group, interviews, and a survey with stakeholders and subject matter experts on previous uses of bibliometrics and ideas for future evaluations. To ensure legacy learning, CAS/Evaluation invited selected participants of the 2015 workshop on approaches and experiences in assessing science quality. The subsequent analysis of qualitative inputs considered the CGIAR context, requirements, limitations, and evolving bibliometric capabilities, as well as Science-Metrix's knowledge of best practices, tools, and limitations.[1],[2]

Results highlighted both the validity of bibliometric methods and the challenges of accurately understanding and capturing key contextual aspects of the research. For example, many researchers involved in agricultural research for development (AR4D) in the Global South are of northern origin, which complicates accurate measurement of South-North collaboration. To address this challenge, CAS/Evaluation learned that CGIAR evaluation experts could document key metadata internally to complement the affiliation-based geographic metadata in existing bibliometric databases. Science-Metrix, in turn, came to appreciate the complexity of affiliation metadata, in which bibliometric measures of South-North co-publication are challenged by researchers' migration trajectories.
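To make the affiliation-based approach concrete, the sketch below shows one way a South-North co-publication flag could be derived from affiliation country codes. The country groupings, field names, and records are assumptions made for this illustration, not CGIAR's or Science-Metrix's actual classification, and the sketch inherits the limitation discussed above: it reflects institutional affiliations, not researchers' origins or migration trajectories.

```python
# Illustrative sketch only: flagging South-North co-publications from
# affiliation country codes. Country groupings and record layout are
# assumptions for this example, not an actual CGIAR/Science-Metrix scheme.

GLOBAL_SOUTH = {"KE", "IN", "BR", "ET", "PE"}   # illustrative subset
GLOBAL_NORTH = {"US", "GB", "FR", "NL", "DE"}   # illustrative subset

def is_south_north_copub(affiliation_countries):
    """Return True if a publication lists at least one Global South and
    one Global North affiliation (ISO 3166-1 alpha-2 codes assumed)."""
    countries = set(affiliation_countries)
    return bool(countries & GLOBAL_SOUTH) and bool(countries & GLOBAL_NORTH)

# A paper with Kenyan and Dutch affiliations is flagged as a South-North
# co-publication even if every author is of northern origin, which is the
# gap that internally documented metadata would help close.
print(is_south_north_copub(["KE", "NL", "KE"]))  # True
```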

In this way, Science-Metrix's work built on CGIAR's existing strengths in bibliometrics, particularly in terms of (1) metadata curation and (2) contextual understanding. CGIAR already had mechanisms for accessing and curating data and metadata from multiple sources to assess peer-reviewed publications and associated information submitted by CGIAR entities. Additionally, CGIAR's understanding of publishing practices among researchers implementing AR4D, such as those in the Global South, provided important contextual information.
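As a hypothetical sketch of what such curation can involve, the snippet below enriches submitted publication records with fields from an external bibliometric database by matching on DOI. The field names, sources, and sample records are invented for the illustration and do not describe CGIAR's actual systems.

```python
# Hypothetical sketch of metadata curation across sources: enriching
# submitted publication records with fields from an external bibliometric
# database, matched on DOI. Field names and records are illustrative.

submitted = [
    {"doi": "10.1000/xyz123", "title": "Drought-tolerant maize trial", "center": "Center A"},
]
external = [
    {"doi": "10.1000/XYZ123", "citations": 14, "affiliation_countries": ["KE", "NL"]},
]

def normalize_doi(doi):
    # DOIs are case-insensitive, so normalize before matching.
    return doi.strip().lower()

external_by_doi = {normalize_doi(rec["doi"]): rec for rec in external}

curated = []
for rec in submitted:
    match = external_by_doi.get(normalize_doi(rec["doi"]), {})
    curated.append({**match, **rec})  # submitted fields take precedence

print(curated[0]["center"], curated[0].get("citations"))  # Center A 14
```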

Drawing on previous work, Science-Metrix recommended additional indicators and comparative analyses to better situate CGIAR research among comparable organizations. Several stakeholders expected bibliometrics to contribute mainly to performance evaluations of individuals or research teams, with a focus on volume and citation impact indicators, which unintentionally narrowed the scope for bibliometric assessment. To address this tension between using bibliometrics for staff performance evaluation or review and using them for program evaluation (i.e., effectiveness and portfolio-level quality of science), Science-Metrix introduced CGIAR stakeholders to the breadth of available bibliometric indicators and their possible contributions to program evaluation.
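As a toy illustration of that breadth, the sketch below contrasts simple volume and citation counts with portfolio-level indicators such as a field-normalized citation score and the share of South-North co-publications. The records and field baselines are invented for the example and are not drawn from the technical note.

```python
# Toy illustration: moving beyond raw volume and citation counts toward
# portfolio-level indicators. Records and field baselines are invented.

publications = [
    {"citations": 12, "field_baseline": 6.0,  "south_north_copub": True},
    {"citations": 3,  "field_baseline": 6.0,  "south_north_copub": False},
    {"citations": 25, "field_baseline": 10.0, "south_north_copub": True},
]

volume = len(publications)                                  # output volume
total_citations = sum(p["citations"] for p in publications)

# Field-normalized citation score: each paper's citations relative to the
# expected citations for its field, averaged over the portfolio.
ncs = sum(p["citations"] / p["field_baseline"] for p in publications) / volume

# A portfolio-level collaboration indicator.
south_north_share = sum(p["south_north_copub"] for p in publications) / volume

print(f"Volume: {volume}, total citations: {total_citations}")
print(f"Mean normalized citation score: {ncs:.2f}")
print(f"Share of South-North co-publications: {south_north_share:.0%}")
```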

Thanks to co-design, the authors iteratively constructed and validated 17 actionable recommendations to leverage, expand, and refine CGIAR's capabilities in a feasible way. The recommendations covered dimensions relevant to measuring QoS through bibliometric indicators in CGIAR's theories of change. The technical note pointed out that, despite their utility, all bibliometric indicators have weaknesses when used individually. Even when indicators are used together, results are strengthened by additional evidence. Moreover, the purpose of the research (in this case, AR4D) helps define the weight given to different dimensions in its assessment. Careful consideration of research context and management would allow evaluators to arrive at robust conclusions on which to build constructive recommendations toward improving the performance of scientific programs.
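Purely as a hypothetical illustration of purpose-dependent weighting, the snippet below combines dimension scores with weights an evaluator might choose for AR4D. The dimensions, scores, and weights are invented for this example and are not taken from the technical note.

```python
# Hypothetical illustration of purpose-dependent weighting of QoS
# dimensions; all dimensions, scores, and weights are invented.

dimension_scores = {"scientific_credibility": 0.8, "legitimacy": 0.7,
                    "relevance_to_development": 0.9, "effectiveness": 0.6}

# For AR4D, an evaluator might weight development relevance more heavily
# than for curiosity-driven research (weights sum to 1).
ar4d_weights = {"scientific_credibility": 0.2, "legitimacy": 0.2,
                "relevance_to_development": 0.4, "effectiveness": 0.2}

weighted_score = sum(dimension_scores[d] * w for d, w in ar4d_weights.items())
print(f"Weighted QoS score under AR4D weighting: {weighted_score:.2f}")  # 0.78
```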

While the co-design process strengthened and enriched the evidence-based recommendations, input from a larger number of CGIAR stakeholders, and a broader understanding among them of the uses and value of bibliometrics, would have complemented the contributions of the CGIAR monitoring and evaluation (M&E) experts and external subject matter experts who informed the technical note. CGIAR stakeholders highlighted the need to include bibliometrics in the evaluation of Outcome Impact Case Reports (OICRs) and to develop bibliometrics that capture policy uptake and impact. To make the most of co-designed processes and products, future projects could involve the people being evaluated more closely, including through a larger-scale qualitative component; integrate more perspectives, including those of southern researchers; and leverage a pilot bibliometrics evaluation focused, where possible, on integrating bibliometrics with qualitative indicators. In the continuing development of a Guideline based on the technical note, CAS/Evaluation will draw on these lessons about co-design in practice.

Ultimately, this co-design process created learning for both sides. Science-Metrix analysts were able to refine their assumptions, while CAS/Evaluation gained awareness of the breadth of bibliometric indicators and their potential for enhancing M&E, specifically as part of evaluating QoS. Toward wider learning across the evaluation community, CAS/Evaluation, together with the ICARDA MEL team, a core contributor to the development of the technical note, will convene a session on evaluating Quality of Science at the European Evaluation Society Conference in June 2022. The collaboration was further exemplified and strengthened by engagement in the EvalForward discussion (EN, FR, ES), as described in the news post by Science-Metrix.

 

[1] Technopolis Group, & Science-Metrix. (2020). Evaluation of the Belmont Forum: Final report. Retrieved from https://www.belmontforum.org/wp-content/uploads/2021/03/Belmont-Forum-Evaluation-Report.pdf.

[2] Pinheiro, H., Vignola-Gagné, E., & Campbell, D. (2021). A large-scale validation of the relationship between cross-disciplinary research and its uptake in policy-related documents, using the novel Overton altmetrics database. Quantitative Science Studies, 2(2), pp. 616–642. doi:10.1162/qss_a_00137.