Date: Wednesday, August 13, 2025
Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future Individuals Weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
Hello, my name is Ian Schwenke, MEL Advisor at Nuru.
Across global development programs, third-party evaluations are widely seen as a benchmark of credibility. But too often, these evaluations are designed and implemented without meaningful engagement from the very organizations delivering the interventions, under the assumption that local involvement would introduce bias.
In my experience, this assumption is not only flawed; it actively undermines the quality and usefulness of evaluation itself.
The belief that locally led monitoring, evaluation, and learning (MEL) systems are inherently less objective has led many donors to default to external evaluators. While third parties can play an important role, I’ve seen situations where their complete control of the process leads to serious challenges. In some cases, they subcontract enumerators unfamiliar with the setting, apply rigid tools without adapting them, or misinterpret cultural dynamics in their analysis.
I’ve observed this disconnect firsthand. When local teams are excluded, evaluations often suffer from misaligned indicators, poor translation, a minimal understanding of local systems, and a limited grasp of the program’s intended outcomes, producing findings that lack both relevance and accuracy.
There is growing enthusiasm for locally led development. Donors are increasingly shifting resources toward national and community-based implementers. But evaluation is often the last part of the system to be localized. Even when funding flows to local partners, MEL responsibilities are frequently handed off to outside organizations, while local teams are expected to align with pre-established indicators and external reporting structures.
This weakens both learning and ownership. It also reinforces the perception that credibility requires distance rather than context.
I have found that evaluation is most effective when local MEL teams lead key components of the process, particularly tool design, data collection, and contextual interpretation. These teams often bring the cultural fluency, trust, and local knowledge necessary to gather meaningful, high-quality data.
At the same time, I have also worked with third-party researchers who brought real value, especially when their role was focused on technical support, training, and analytical collaboration. In these situations, local teams took the lead in tailoring tools, overseeing data collection, and validating results, while external partners contributed to methodological guidance, cross-checking, and final synthesis. That balance has proven both rigorous and equitable.
Evaluation must be grounded in the communities it seeks to understand. That means trusting local organizations to lead not just in implementation, but in the measurement of their own impact. Third-party involvement should strengthen, not override, local systems. We don’t need to choose between credibility and context. We need to design evaluation approaches that value both. The question isn’t whether local MEL teams are capable of evaluating their own programs. It’s whether we’re willing to give them the authority to do so.
Ian Schwenke is the Monitoring, Evaluation, and Learning (MEL) Advisor at Nuru, where he coordinates evaluation systems across the Nuru Collective. With nearly a decade of experience living and working in Africa, he focuses on building MEL frameworks that are both methodologically sound and locally grounded. Ian holds an M.A. in Global Human Development from Georgetown University.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.