Date: Friday, March 20, 2026
Hello AEA365! This is Danni Dorancy and Sabrina Comic Savic. We work with Evaluation + Learning Consulting to provide evaluation and technical assistance support to organizations across the United States.
Last year, we supported Kingsbridge Heights Community Center (KHCC) in the evaluation of two distinct initiatives: the Teen Center & College Directions Program (focused on youth development and postsecondary readiness) and a Community-Based Restorative Justice (CBRJ) initiative. Although both programs served youth, their structures, goals, and theories of change differed substantially. Evaluating them required flexibility, responsiveness, and attention to context.
We used a mixed-methods design that included youth surveys (measuring engagement, outcomes, and perceptions) along with interviews and focus groups to deepen our understanding of students’ lived experiences.
However, the programs were at different stages of development and had varying data capacities. This meant our evaluation plans could not be identical.
The Teen Center & College Directions Program had more structured programming and clearer participation pathways. This allowed for more straightforward measurement of short-term outcomes such as academic confidence, postsecondary readiness, and engagement.
The restorative justice initiative was different. It was relationship-driven and less linear. Outcomes like trust, accountability, and conflict resolution unfold over time and do not always lend themselves to simple pre-post measurement.
For restorative justice work, process evaluation and qualitative insight were as important as, if not more important than, traditional outcome metrics.
The takeaway: Evaluation design should reflect program structure, maturity, and theory of change.
Across both programs, youth voice proved to be essential.
Survey data provided trends, but interviews and focus groups added nuance. Youth shared perceptions of safety, belonging, and adult support. They described which activities felt meaningful and which felt less relevant. They identified barriers we might not have anticipated.
This qualitative insight contextualized quantitative findings, clarified why certain outcomes appeared stronger or weaker, and generated experience-based recommendations for program improvement.
Centering youth voice is often framed as an equity practice (and it is!). But it is also a methodological strength. It improves interpretation, sharpens recommendations, and increases the likelihood that findings will be used.
Not everything went smoothly. We navigated survey platform accessibility issues, overlapping survey links distributed to the same students, and internal capacity constraints. Rather than viewing these as purely logistical problems, we treated them as feedback about data systems, communication flows, and operational realities.
If you’re evaluating youth programs across varied contexts, remember this: youth-serving programs are dynamic, relational, and context-dependent. Our evaluation approaches should be too.
Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post so that we may enrich our community of practice. Would you like to submit an AEA365 tip? Please send a note of interest to aea365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.