Date: Wednesday, November 5, 2025
Hi! I’m Lina Cherfas (she/her), and my consulting practice is A Good Question. I support mission-driven organizations through evaluation, planning, and facilitation, primarily in the public health and advocacy spaces.
One of the most interesting parts of advocacy and policy evaluation is thinking about the medium- and long-term power-building outcomes that progressive campaigns are trying to achieve, in addition to their target policy change (or successful defense against a harmful policy). My collaborators and I recently worked with two statewide coalitions to assess their campaigns in real time, as they unfolded. From the very beginning, the evaluation approach was co-designed around the coalitions’ learning priorities. We wanted the resulting analysis to be useful to them as they continue to move together in future campaigns and to build people power.
To kickstart the process, we facilitated a day-long, in-person workshop with each coalition, bringing together organizers from across the partner organizations to co-design the evaluation plan for the project. Most of the people in the room had never designed an evaluation before, and few had much other experience with evaluation. So, we worked to facilitate the workshops in a way that would engage people with the evaluation process, foster relationships, and generate a meaningful evaluation plan.
The co-design juices really flowed when we got to the activity where we asked participants to generate their own learning questions around themes they had identified earlier in the day. They came up with hundreds(!!) of questions at all levels of specificity, from “How many new and total volunteers were engaged?” to “Is leadership shared in the coalition? How/what does that look like?” to “Who is not here, who should be?” to “How do we build community power? What was the result (impact) of that power? Were we able to wield power in other places/ways as a result?”
Coming up with their own theories of change, generating their own questions, and then thinking through who could answer them helped the groups connect with the purpose of the evaluation and see its value, both for demonstrating their work to funders and for learning to inform future efforts. The resulting evaluation plan was better than what we (the evaluators) could have come up with on our own. And these co-design workshops helped sustain participation in the evaluation process through the following months, even as the organizers’ schedules filled up and other priorities emerged.
The American Evaluation Association is hosting APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. The contributions all this week to AEA365 come from our APC TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.