Date: Saturday, November 8, 2025
Hello, I am Barbara Klugman, an activist and academic turned evaluator – since 2010, focusing on evaluating advocacy and social change efforts by funders, networks, NGOs, or community groups.
I am preoccupied by a tension in the discourse and practice of evaluation. On the one hand, there is the importance of challenging power dynamics and prioritising voice and ownership. On the other, there is a view that, in doing so, anything less than full participation of my client’s community partners, or grantees, in the design, data gathering, sensemaking and use of an evaluation is “extractive” and a problematic exercise of ‘power over’.
I want to query this ‘all or nothing’ approach based on my own experience working on evaluations with both funders and activist groups. In general, I find that my clients emphasise the importance of ‘participation’ – of staff, their constituents, or grantees (in the case of funders) – as a principle underpinning the evaluation design. Yet once we start, other work always seems to trump allocating time for the evaluation process. They are anxious about ‘asking for more’ from staff, or from communities, given how busy they are. And yet, when these groups do get involved in collectively telling the story of an advocacy process and its outcomes, or in reflecting on what the data analysis we share tells them about their strategies, their hopes and concerns, they find it enormously useful.
So I find myself working in the middle ground, seldom having the opportunity to work with a client and their stakeholders in an entirely participative process, but usually finding ways to maximise what time they choose to make available.
A recent example: I asked a grantee of a funder whether I could make contact with some of their community constituents to get a deeper understanding of the intervention the grantee had described. Writing to them using Google Translate, I hoped one or two would respond. Instead, six responded with long descriptions – what they had done, the changes they’d influenced, and why it mattered to community members. Our team offered workshops to grantees who had responded positively to the funder’s invitation to participate in an assessment of its strategy over time. The grantee who attended our workshops said that these organisations were thrilled to be asked about their experiences, and to have the chance to contribute to the evaluation of the funder. One workshop used a storytelling tool in the hope that, while we could gain insights on an evaluation question from their stories, they might find the tool useful in their own work – following a principle of reciprocity. We also shared the tool with them.
Participants spoke about how useful our engagement was for their own ways of documenting, reflecting on and reporting their work. In the current discourse, asking for such inputs could easily be construed as ‘extractive’, yet it was both meaningful to them and deepened the quality of the evaluation findings. Some grantees chose not to attend, and perhaps this is the crux of the issue – groups can decide for themselves.
My conclusion, as I operate in this conundrum, is that one can endeavour to uncover the multiple ways of knowing of different stakeholders in an evaluation – an ethical principle – while doing one’s best to ensure that all participants experience their involvement as a meaningful and valuable use of their time, even when the situation makes it impossible to be fully participative at every point of the process. If one invites as much input as groups or individuals want or are able to give, that is as much as one can do. Somewhere in this lies the importance of respecting my clients’ time priorities, even when I’d wish for more.
The American Evaluation Association is hosting APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to AEA365 come from our APC TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.