Date: Tuesday, October 21, 2025
Hi everyone! We are Jennifer, Ann, and Willie, evaluation practitioners from Purdue University’s Evaluation and Learning Research Center with expertise and a keen interest in post-conflict development and education systems rebuilding. Today we are sharing insights from a recently published article we wrote with our Somali partners, “Ensuring Data Quality in Large International Development Projects: Tools, Strategies, and Lessons Learned,” which offers valuable lessons for evaluators working in fragile and complex contexts.
Our evaluation of an education intervention in Somalia highlights the importance of grounding evaluations in the local sociopolitical context. For example, Somalia’s fragmented governance and ongoing insecurity made project-wide planning and data collection efforts challenging. Effective evaluation in fragile environments requires continuously adapting frameworks, timelines, and data collection strategies to local dynamics rather than imposing rigid models developed in more stable settings.
Rather than using a rigid, top-down approach to data quality assurance, this article encourages evaluators to adapt their quality assurance practices to the realities on the ground.
Collecting reliable data can be challenging – especially in fragile environments. Using a mixture of methods and data sources can improve both the validity and credibility of findings. Our paper describes how we used multiple data sources and collection methods to enable tool improvement without compromising data integrity.
The article shares several strategies, lessons learned, and tools that can help evaluators assure the quality and integrity of data collection in challenging environments. For example, we include a data collection planning matrix designed to ensure rigor while embedding flexibility and on-the-ground decision-making, as well as a field notes template to streamline and standardize field descriptions across multiple field sites and data collectors. Evaluators and researchers can adapt these tools to enhance their own data collection efforts.
Evaluation studies in fragile contexts, like Somalia, require humility, patience, and adaptability. By approaching evaluation as both a learning and a system-building opportunity, we can contribute far more than a targeted final report.
Thanks for reading! You can find more details on our Somalia evaluation on Purdue University’s Research Repository and the ELRC’s LinkedIn page. Please get in touch if you’d like to discuss this article, the study on which it is based, or other evaluation topics!
The American Evaluation Association is hosting the American Journal of Evaluation (AJE). All posts this week are contributed by evaluators who work for AJE. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.