In 2022-23, IAES/Evaluation contracted an external evaluation team to conduct an independent evaluation to assess CGIAR’s Gender Equality, Youth, and Social Inclusion Impact Area Platform (GENDER Platform). The team was led by an evaluation expert and included four specialized technical experts. The narrative shared here demonstrates how having an explicit and credible evaluation approach (and not just a research design) ensures a transparent empirical process that supports a useful evaluation.

All evaluations should be based on a sound evaluation theory to structure the process and ensure scientific rigor. Evaluation theory informs the methods chosen, the decisions made in the field, how analysis is done and, importantly, how an intervention is valued. A good evaluator ensures that empirical data are gathered from multiple perspectives at different levels, triangulates those data to answer each evaluation question, and makes sense of the findings. When working with CGIAR, the evaluative process is further aligned with CGIAR’s Evaluation Framework, which details standards and principles for the conduct of evaluations.

To share the GENDER Platform evaluation story, I will narrate it from my perspective as the Evaluation Team Lead.

The Scene

The GENDER Platform has been the internal mechanism that supports CGIAR in closing the gender gap. Several key stakeholders requested that the GENDER Platform be evaluated to meet three objectives: (1) to assess the GENDER Platform’s progress; (2) to document lessons and best practices that can be used to inform other impact area platforms; and (3) to provide forward-looking recommendations for the Platform. The evaluation resulted in 11 recommendations, all of which were actively engaged with and, to a large extent, accepted by the GENDER Platform and the wider CGIAR management team (see Management Response).

The interesting part of the story that I want to share in this blog is what happened behind the scenes. So, join me on a short journey that provides a glimpse into how the evaluation process produced useful recommendations.

Behind the Scenes of a Useful Evaluation

Engagement, participation, and collaboration. These were the three key concepts that informed the feminist-led evaluation approach. When I say engagement, participation, and collaboration, I mean iterative, intensive, thought-provoking facilitated processes that took time, patience, effective communication, and negotiation. These processes involved multiple stakeholders throughout, namely IAES, the GENDER Platform, and the various donors, whose needs, values, considerations, and technical input were visible in the evaluative process and can be recognized in the multiple iterations of the Inception and final Evaluation Reports.

My team and I sought to hear multiple voices, and actively engaged with their ideas, perspectives, and insights to inform the evaluation. What did that look like in practice? Here are two examples from behind the scenes.

  • Evaluation scope, purpose and questions. Key evaluation users developed, informed, and validated the evaluation scope and evaluation questions. How did that happen? Through focused, meticulous email conversations, facilitated Zoom calls, and in-person discussions. The process provided detailed insight into what key stakeholders needed from the evaluation, which then informed the evaluation scope, questions, and sub-questions, resulting in an evaluation that addressed the users’ key areas of interest, not the evaluators’.

  • Credible data, credible evidence, and credible evaluations. Formal and informal discussions clarified what these key stakeholders considered to be credible data, credible evidence, and credible evaluation. If an evaluation lacks credibility in any of these areas, it is unlikely to be used. What did these influential groups cherish? Transparency, fairness, various feminist values, empirical data, practical use, collaboration, and participation. After careful consideration, I selected and intertwined four evaluation theories to guide the evaluative process in a way that reflected the stakeholders’ values and needs: feminist evaluation, participatory evaluation, utilization-focused evaluation, and theory-driven evaluation. Each theory brought unique (and sometimes overlapping) aspects, with explicit thinking and guidance that clearly reflected the key stakeholders’ values. Applying these approaches led to credibility at all three levels, for all key stakeholders.

You may be wondering about the evaluation theories. Let me provide a bit more detail. All evaluations need to be guided by an evaluation approach; otherwise, it’s just a research design with some personal opinions tagged on. CGIAR underscores this in its Evaluation Framework. Thus, selecting an evaluation approach, or approaches, is critical; the evaluation approach is what makes the process transparent. For example, the approach guides technical and methodological decisions and clarifies how results will be valued.

Let’s take a brief look at the evaluation approaches selected (and then intertwined) based on their credibility with the key stakeholders and appropriateness to the Platform’s evaluation questions and context.

  • Feminist evaluation (FE), as the main leading approach, encapsulates various feminist values that guided the entire process and is discussed in more detail below. The evaluand, which was the GENDER Platform, epitomizes feminist values. Therefore, leading with a feminist design was considered common sense. A feminist approach engages with power dynamics, ensures that data is gathered in a sensitive, culturally appropriate manner and emphasizes the need to value findings from multiple viewpoints, all of which resonate with the Platform’s guiding values.

  • Participatory evaluation ensures a collaborative process. In this evaluation, the approach explicitly engaged key user groups in developing the evaluation questions, clarifying sampling strategies and data collection methods, analyzing data, and developing recommendations.

  • Utilization-focused evaluation ensures that the evaluation process and evaluation findings are useful. This approach laid out clear steps that influenced how people were engaged and placed a constant focus on ensuring use.

  • Theory-driven evaluation provides an explicit data collection framework rooted in the GENDER Platform’s theory of change. The evaluation explored and collected data that tested the Platform’s theory of change. 

The adaptation and the interweaving of their differences and similarities resulted in a strong and unique evaluative approach specifically designed for the GENDER Platform evaluation.

The approach that received the most questions: FE

The FE approach received the most questions from key stakeholders and from external and IAES peer reviewers. I was often asked three routine questions, which I share below with my responses.

  • There are different kinds of feminism. What type is FE grounded in? FE is not grounded in any one particular feminism; rather, it is grounded in three common feminist beliefs: (1) there should be equity amongst humans; (2) gender inequity leads to social injustice; and (3) gender-based inequalities are systemic and structural.

  • Do you need to be a feminist to implement FE? To use FE, an evaluator does not need to be a feminist. Rather, they need to identify that at least one of the above three core tenets is appropriate and useful for a particular evaluation context. For instance, one or more of the beliefs could inform how and what data are collected, and from whom, which would then address the evaluation questions at hand.   

In a nutshell, what is an FE approach?

An FE approach values the process as much as the findings; encourages reflective, empowering, collaborative, and participatory processes that actively support social justice agendas; and provides a platform for women’s voices. However, I specifically applied Feminist Principles Evaluation (FPE) in the Platform’s evaluation.

What is FPE?

FPE offers a specific, concrete, practical, and action-oriented way to implement FE, which proved useful for the GENDER Platform evaluation (see Annex 1 for the detailed methodology). This new approach, which was further informed by CGIAR's GENDER Platform evaluation process, builds on more general FE thinking and addresses gender inequity through new guiding, useful, inspiring, developmental, and evaluable principles. These are provided below.

Overarching Principle

A Feminist Principles Evaluation is guided by one overarching principle: use evaluation to support transformational change for women that paves the way for gender equality. This value guides the entire evaluation and is supported by nine operational principles that give guidance for how to implement Feminist Principles Evaluation (FPE).

Principle 1: Co-create the evaluation.  Engage diverse women in the co-creation of the evaluation.

Principle 2: Attend to gender inequities in all aspects of the evaluation.  Ensure that gender inequities are identified and engaged with throughout the evaluative process.

Principle 3: Include diverse perspectives. Recognize that women are not monolithic or homogeneous in their experiences or perspectives, and include these diverse perspectives in the evaluative process.

Principle 4: Strengthen relevant skills and knowledge and ensure co-learning. Support diverse women’s participation in the evaluative process by identifying and addressing germane learning needs. Amplify learning and co-learning throughout the evaluative process.

Principle 5: Co-identify the source(s) of power dynamics that support inequities and work to reduce those inequities through the evaluation process. Gender inequities are fundamentally about power imbalances. Because FPE brings a political commitment to support transformational change for women, identify power dynamics and be prepared to engage with them explicitly.

Principle 6: Use a systems perspective to engage with systemic and structural gender inequalities. Systems approaches bring practical ways to explore multiple suppressive systems (Meadows, 2001). FPE draws on systems thinking to elaborate on how to practically engage with system-based inequities, encouraging the evaluator to specifically identify who or what is responsible for the inequity, and clarify how it can be practically addressed.

Principle 7: Recognize that knowledge production is not value-free. Question how facts are obtained, and query what is accepted as a fact. Understand how knowledge can replicate power imbalances that institutionalize gender discrimination.

Principle 8: Co-examine how gender inequities are intersectional. Consider what can marginalize women beyond gender, such as, but not limited to, race, class, sexual orientation, or physical ability. Gender is not always the only, or perhaps even the primary, reason that a woman may face discrimination (hooks, 2020).

Principle 9: Advocate for, and take action to reduce, gender inequities. Use evaluation processes and evidence to reduce gender inequities. Act on the evaluation’s empirical data. Advocate for empirical evaluation evidence to be used to drive and push society into a better place for women.

End of our Journey

We are now at the end of our journey. I shared some ideas on how to support a credible evaluation that led to the development of useful and used recommendations. I highlighted how a feminist approach, when it aligns with the key evaluation users’ values, can guide an evaluation, and how an evaluation design can be further strengthened by intertwining it with other approaches. Finally, I presented a new FE approach, FPE, which further guided the Platform’s evaluation by offering nine concrete principles.


FPE is an approach developed by Donna Podems and will be available in her book, Feminist Principles Evaluation, published by Guilford Press in 2025. An introductory chapter on the approach can be found in the Research Handbook on Program Evaluation, edited by Kathryn E. Newcomer and Steven W. Mumford, forthcoming in 2024 from Elgar Publishing. The principles above are drawn from that as-yet unpublished approach.