Date: Friday, April 17, 2026
Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
My name is Anne Shukla, and I’m a Senior Measurement and Evaluation Officer at Allen Family Philanthropies (AFP). Throughout my career as an evaluation consultant, I frequently encountered organizations stuck measuring specific outputs or survey questions because a funder required them. These mandated metrics often failed to capture what was truly meaningful to the organization or to reflect its actual progress. When I transitioned to foundation work, I was determined not to repeat these mistakes. I’ve sought reporting structures that meet grantees where they are without forcing specific data collection strategies, while still enabling AFP to aggregate insights into the success of our strategies.
Enter rubrics. For AFP’s Mobilizing Young Leaders awards, we developed a rubrics-based measurement approach for grantees to demonstrate progress toward shared outcomes. Rather than requiring all grantees to collect the same data, rubrics provide an outcomes framework that honors the diversity of approaches, populations, and settings our grantees represent. Our rubrics measure three outcomes, each broken down into three aspects. Grantees report where their group of participants falls, providing whatever evidence supports their ratings (observations, surveys, testimonials, etc.). This stands in contrast to reporting that mandates specific survey questions or outputs, approaches that can yield comparable data but can also force grantees to collect information that doesn’t align with their program model or stage of development. A nascent grassroots organization and a well-established youth leadership program may each be doing excellent work, but requiring them to measure indicators in the same way may result in data that’s meaningless, burdensome, or both.
Creating rubrics requires input from those who know the work best. We engaged youth development professionals, existing grantees, and young people in our development process. Youth joined focus groups to review proposed language, while grantee field-testers provided feedback on clarity and feasibility. This collaborative approach ensured our rubrics reflected real-world contexts and resonated with the people who would use them.
The real power of rubrics emerges when they become tools for program reflection and improvement, not just compliance. We’re building this in by using rubric reflections to drive discussion during quarterly grantee convenings, creating space for grantees to share evidence, discuss challenges, and learn from each other. This requires ongoing feedback loops and commitment from AFP to use the rubrics as learning tools. Early feedback from grantees suggests they appreciate the flexibility and opportunity to think more critically about what progress looks like in their specific context; however, there is a learning curve to overcome, including challenges in authentically gauging youth progress.
AFP is committed to learning alongside our grantees to support positive outcomes for youth. Rubrics aren’t a perfect solution, but they represent a meaningful step toward respecting organizational autonomy while maintaining accountability for learning together. If you’re exploring alternatives to output-based reporting, rubrics might be worth considering.
Do you have experience with rubrics or questions about this approach? Share your thoughts in the comments below. And if you’re planning to attend the AEA conference this year, we would love to speak with you there!
Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.