Expertise Fused with Context: Strengthening Bibliometric Recommendations Through Co-Design

Bibliometrics is the use of statistical methods to analyze books, articles, and other publications, especially peer-reviewed research publications. For monitoring and evaluating research programs, bibliometric methods offer quantitative insights into research that are scalable and transparent. CAS/Evaluation and Science-Metrix recently co-designed a technical note on Bibliometric Analysis in CGIAR; the note makes recommendations for the enhanced use of bibliometrics as one of the methods for evaluating quality of science. This blog focuses on the value of the technical note's co-design process in expanding the horizons of bibliometric analysis and motivating its further use.
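To make these quantitative insights concrete, here is a minimal sketch of two elementary bibliometric indicators, publication volume per year and mean citations per paper, computed over a toy record set; the data and field names are invented for illustration and do not reflect CGIAR's actual schema.

```python
# Minimal illustration (hypothetical data): two elementary bibliometric
# indicators computed from a toy set of publication records.
from collections import Counter

publications = [
    {"year": 2020, "citations": 12},
    {"year": 2020, "citations": 3},
    {"year": 2021, "citations": 7},
]

# Publication volume per year (a scale indicator).
volume_by_year = Counter(p["year"] for p in publications)

# Mean citations per paper (a crude visibility indicator).
mean_citations = sum(p["citations"] for p in publications) / len(publications)

print(dict(volume_by_year))      # {2020: 2, 2021: 1}
print(round(mean_citations, 2))  # 7.33
```

Real analyses draw on curated databases and far richer metadata, but both indicators scale to any corpus size and can be recomputed transparently, which is what makes bibliometrics attractive for program monitoring.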

After a competitive selection process, the Evaluation Function of the CGIAR Advisory Services Shared Secretariat (CAS/Evaluation) contracted Science-Metrix (Elsevier) as expert consultants to co-design a technical note offering recommendations on the use of bibliometrics in evaluating the quality of science (QoS). CAS/Evaluation leveraged Science-Metrix's expertise: its 20-year history of using bibliometrics for research program evaluation and its unbiased, external view of CGIAR research practices. In particular, this work built on Science-Metrix's bibliometric assessment for the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH of over 100 research projects, which included CGIAR work as well.

Participatory and iterative co-design helped integrate and synthesize documents and experiences to identify novel ways forward. The recommendations were co-designed by systematically combining Science-Metrix's bibliometric expertise with input from CGIAR stakeholders and lessons from previous independent evaluative reviews of CGIAR Research Programs.

As a first step, Science-Metrix conducted a document review of materials provided by CAS/Evaluation on CGIAR governance and the transformation to One CGIAR, prior independent evaluations, and ongoing efforts to enhance data management for CGIAR research and activities. The two groups then jointly conducted a focus group, interviews, and a survey with stakeholders and subject matter experts about previous uses of bibliometrics and ideas for future evaluations. To ensure legacy learning, CAS/Evaluation invited selected participants of the 2015 workshop on approaches and experiences in assessing science quality. The subsequent analysis of qualitative inputs considered the CGIAR context, requirements, limitations, and evolving bibliometric capabilities, as well as Science-Metrix's knowledge of best practices, tools, and limitations.[1],[2]

Results highlighted both the validity of bibliometric methods and the challenge of accurately understanding and capturing key contextual aspects of the research. For example, many researchers involved in agricultural research for development (AR4D) in the Global South are of northern origin, which complicates accurate measurement of South-North collaboration. To address this challenge, CAS/Evaluation learned that CGIAR evaluation experts could document key metadata internally to complement the affiliation-based geographic metadata in existing bibliometric databases. Science-Metrix, in turn, came to appreciate the complexity of affiliation metadata, in which bibliometric measures of South-North co-publication are confounded by researchers' migration trajectories.
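To illustrate why affiliation metadata matters here, the sketch below flags a paper as a South-North co-publication purely from affiliation country codes; the country grouping, records, and function are hypothetical placeholders rather than the method the technical note prescribes, and the migration trajectories described above are precisely what such an affiliation-only proxy cannot see.

```python
# Hypothetical sketch: affiliation-based classification of South-North
# co-publications. The country grouping is an illustrative assumption.
GLOBAL_SOUTH = {"BD", "ET", "KE", "UG", "VN"}  # assumed ISO country codes

def is_south_north_copub(affiliation_countries: set) -> bool:
    """True if the paper lists at least one southern and one northern
    affiliation. A researcher of northern origin affiliated with a
    southern institute still counts as 'southern' here, which is the
    limitation discussed above."""
    south = affiliation_countries & GLOBAL_SOUTH
    north = affiliation_countries - GLOBAL_SOUTH
    return bool(south) and bool(north)

papers = [
    {"id": "p1", "countries": {"KE", "FR"}},  # flagged South-North
    {"id": "p2", "countries": {"VN", "BD"}},  # South-South, not flagged
]
share = sum(is_south_north_copub(p["countries"]) for p in papers) / len(papers)
print(f"South-North co-publication share: {share:.0%}")  # 50%
```

Internally documented metadata, as suggested above, could refine exactly this kind of classification.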

In this way, Science-Metrix's work built on CGIAR's existing strengths in bibliometrics, particularly in (1) metadata curation and (2) contextual understanding. CGIAR already had mechanisms for accessing and curating data and metadata from multiple sources to assess peer-reviewed publications and associated information submitted by CGIAR entities. Additionally, CGIAR's understanding of publishing practices among researchers implementing AR4D, such as those in the Global South, provided important contextual information.

Drawing on previous work, Science-Metrix recommended additional indicators and comparative analyses to better situate CGIAR research among comparable organizations. Several stakeholders expected bibliometrics to contribute mainly to performance evaluations of individuals or research teams, focusing on volume and citation impact indicators and thereby unintentionally narrowing the scope for bibliometric assessment. To address this tension between using bibliometrics for staff performance evaluation and using them for program evaluation (i.e., effectiveness and portfolio-level quality of science), Science-Metrix introduced CGIAR stakeholders to the breadth of available bibliometric indicators and their possible contributions to program evaluation.
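As one example of a program-level indicator beyond raw volume and citation counts, the sketch below computes a mean normalized citation score (MNCS), a standard field-normalized indicator for benchmarking a portfolio against comparator organizations or a world baseline; the baseline values and records are invented for illustration.

```python
# Sketch (hypothetical data): mean normalized citation score (MNCS).
# Each paper's citations are divided by the world average for its
# field and publication year, so fields with different citation
# cultures become comparable; the mean ratio benchmarks the portfolio.
world_baseline = {("agronomy", 2021): 4.0, ("economics", 2021): 2.5}

portfolio = [
    {"field": "agronomy", "year": 2021, "citations": 6},
    {"field": "economics", "year": 2021, "citations": 5},
]

scores = [
    p["citations"] / world_baseline[(p["field"], p["year"])]
    for p in portfolio
]
mncs = sum(scores) / len(scores)
print(f"MNCS: {mncs:.2f}")  # 1.75 -> cited ~75% above the world average
```

Field normalization is what lets an evaluator situate a portfolio among comparable organizations rather than reward sheer publication volume.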

Thanks to co-design, the authors iteratively constructed and validated 17 actionable recommendations to leverage, expand, and refine CGIAR's capabilities in a feasible way. The recommendations covered the dimensions relevant to measuring QoS through bibliometric indicators in CGIAR's theories of change. The technical note pointed out that, despite their utility, all bibliometric indicators have weaknesses when used individually; even when they are used together, results are strengthened by additional evidence. Moreover, the purpose of the research, in this case AR4D, helps define the weight of different dimensions in its assessment. Careful consideration of research context and management allows evaluators to arrive at robust conclusions on which to build constructive recommendations for improving the performance of scientific programs.

While the co-design process strengthened and enriched the evidence-based recommendations, a broader understanding among CGIAR stakeholders of the uses and value of bibliometrics, along with input from a larger number of stakeholders, would complement the contributions of the CGIAR monitoring and evaluation (M&E) experts and external subject matter experts who informed the technical note. CGIAR stakeholders highlighted the need to include bibliometrics in the evaluation of outcome and impact case reports (OICRs) and to develop bibliometrics that capture policy uptake and impact. To get the most from co-designed processes and products, future projects could involve the people being evaluated more closely, including through a larger-scale qualitative component; integrate more perspectives, including those of southern researchers; and pilot a bibliometrics evaluation focused on integrating bibliometric and qualitative indicators where possible. In the continuing development of a Guideline based on the technical note, CAS/Evaluation will leverage these lessons about co-design in practice.

Ultimately, this co-design process created learning for both sides. Science-Metrix analysts were able to refine their assumptions, while CAS/Evaluation gained awareness of the breadth of bibliometric indicators and their potential for enhancing M&E, specifically as part of evaluating QoS. Toward wider learning in the evaluation community, CAS/Evaluation, together with the ICARDA MEL team, a core contributor to the development of the technical note, will convene a session on evaluating quality of science at the European Evaluation Society Conference in June 2022. The collaboration was further exemplified and strengthened by engagement in the EvalForward discussion (EN, FR, ES), as described in a news post by Science-Metrix.

[1] Technopolis Group, & Science-Metrix. (2020). Evaluation of the Belmont Forum: Final report. Retrieved from https://www.belmontforum.org/wp-content/uploads/2021/03/Belmont-Forum-Evaluation-Report.pdf.

[2] Pinheiro, H., Vignola-Gagné, E., & Campbell, D. (2021). A large-scale validation of the relationship between cross-disciplinary research and its uptake in policy-related documents, using the novel Overton altmetrics database. Quantitative Science Studies, 2(2), 616–642. doi:10.1162/qss_a_00137.

Evaluation
May 20, 2022

Written by

  • Christina Zdawczyk, Research Analyst, Science-Metrix (Elsevier)
  • Etienne Vignola-Gagné, Senior Research Analyst, Science-Metrix (Elsevier)
  • Jillian Lenne, Independent Consultant and Editor
