Date: Monday, February 2, 2026
Greetings! I am Dr. Eziwe Mutsikiwa, an evaluation consultant based in Zimbabwe. I am honoured to share that I attended the annual American Evaluation Association conference in Kansas City this past November as a 2025 International Travel Award winner. This award provided a unique platform to share my research on collaborative evaluation, focusing on how we can bridge the critical gap between evaluation practice and the communities we serve. My work is deeply rooted in the belief that evaluation should not just be about delivering results, but about fostering shared leadership and meaningful engagement.
One of the most significant insights from my work is that traditional evaluation practices often fail because they rely too heavily on metrics-based methods that prioritise the evaluator’s perspective. While technical rigour is essential, conventional approaches often overlook the lived experiences of beneficiaries when interpreting results. We must recognise that evaluation is not just a technical exercise; it is a social one. When evaluators exclude community voices, we risk reinforcing the very power imbalances we aim to address. Shifting to a collaborative model allows us to move from treating people as mere data points to seeing them as active knowledge holders and partners in change.
To transform communities into active participants, I recommend utilising Data Walks. A Data Walk is a participatory process where stakeholders, including community members, collectively navigate and interpret visual data such as maps, photos, and statistics. My tip for success is to co-design the walk with the community from the very beginning. Evaluators should not just present the data, but also involve local stakeholders in selecting which visualisations matter most and identifying the community spaces where the walk should occur. In my research involving educational initiatives in Uganda, this integrated approach led to a substantial increase in community ownership, as parents and local leaders co-created the insights that eventually led to policy adjustments.
Rad Resources: The Theory, Context, Characteristics, and Methodology (TCCM) Framework and Narrative Methods
Evaluators looking for a structured way to implement these ideas are encouraged to utilise the TCCM Framework, which stands for Theory, Context, Characteristics, and Methodology. This innovative, Afrocentric framework, guided by the systematic review principles established by Paul and Rosado-Serrano (2019, https://doi.org/10.1016/j.jbusres.2021.05.005), provides the parameters needed to bring together evaluator expertise and community voices. The TCCM framework has been widely adopted across research disciplines for conducting rigorous, context-sensitive literature reviews, and it can be powerfully applied to participatory evaluation. By using the TCCM model, evaluators can successfully merge Data Walks with the structured storytelling of Narrative Methods.
The American Evaluation Association is hosting International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to AEA365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.