Date: Friday, March 6, 2026
Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future Individuals Weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
Hello! We’re Sanskriti Thapa and Cecilia Msogoya, evaluators with FHI 360’s National Institute for Work and Learning (NIWL). NIWL designs, runs, and evaluates programs focused on education and employment to help young people find meaningful careers and contribute to their communities. Our role as evaluators is to help NIWL partners monitor and evaluate their programs in real time and make sense of the data they collect. One evaluation approach that has gained considerable momentum in recent years is the use of program-specific data visualization through interactive dashboards, particularly as a tool for fostering community engagement.
Drawing from our experience in community development settings, we’ve found that evaluation often involves complex data that can be difficult for practitioners to interpret. Traditional reporting formats—lengthy PDFs, static charts—can limit accessibility and engagement by presenting data as finished conclusions. This constrains when, where, and by whom interpretation occurs, often excluding community members who are not positioned as data specialists or evaluators. In contrast, we’ve seen that data is most powerful when shared, interpreted, and acted upon collectively. This is why we’ve embraced co-creating dynamic dashboards with our partners as a tool for timely, collaborative sense-making, so stakeholders can better understand the data and feel confident acting on the evaluation results.
Across NIWL’s portfolio, data dashboards support inclusive evaluation practices. In this blog post, we offer two main lessons for evaluators in using dashboards and interactive data visualization to strengthen partners’ real-time engagement with program data and inclusive decision-making.
1. Use consistent design choices so partners can compare across sites, understand patterns, and answer their own questions
Partners engage more with a dashboard when visuals feel easy to navigate and are aligned with the data comparisons they naturally make. During the design phase, we work with partners to understand the questions most important to them and to choose visuals that align with their needs and simplify interpretation. This can be as basic as adding slicers (interactive filters that let users subset the data themselves), using line charts that follow the school or grant calendar, or placing sites next to each other in a single view, so differences don’t have to be inferred from separate pages. The goal is not the chart type itself but making it easy for partners to answer the questions they routinely ask. For readers interested in examples of data visualizations that make patterns and contrasts easy to see, visit Evergreen’s Ways to Show Change Over Time and You Just Need More Chart Choices.
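The “sites next to each other in a single view” idea can be illustrated with a small data-reshaping sketch. The sites, months, and counts below are entirely hypothetical (not NIWL data), and the pandas pivot stands in for whatever reshaping your dashboard tool does behind the scenes: once each site is a column in one table, a partner can read differences across a row instead of flipping between pages.

```python
import pandas as pd

# Hypothetical monthly enrollment counts from three program sites.
# Site names, months, and values are illustrative only.
records = pd.DataFrame({
    "month":    ["Sep", "Sep", "Sep", "Oct", "Oct", "Oct"],
    "site":     ["Site A", "Site B", "Site C"] * 2,
    "enrolled": [40, 55, 30, 46, 53, 38],
})

# Pivot so each site becomes a column: one consistent view where
# cross-site differences are read across a row, not inferred
# from separate per-site pages.
side_by_side = records.pivot(index="month", columns="site", values="enrolled")
print(side_by_side)
```

The same long-to-wide step is what makes a consistent multi-site chart possible in most dashboard tools; keeping the site order and layout identical from month to month is what lets partners answer their own questions without relearning the view each time.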
2. Make dashboards part of regular partner check-ins so context and explanations surface in the moment
The most meaningful shift in NIWL’s evaluation work was not the introduction of dashboards, but the practice of pulling them up at the beginning of monthly meetings or annual site visits, and using the data to guide the discussion, rather than reviewing them later as separate reports. This allowed explanations and context to surface during the conversation and be incorporated more accurately into evaluation summaries, preventing important details from being lost.
Interpretation now happens during the discussion rather than after the fact, with partners sharing responsibility for making sense of the results as they emerge instead of reacting to conclusions drawn later. For readers who want additional guidance on how dashboards support ongoing, real-time interpretation rather than end-of-cycle reporting, From Data to Improvement: Social Mechanisms as a Key to Continuous Change offers a clear explanation of how regular data-review routines and shared interpretation strengthen continuous improvement.
If you’re exploring similar approaches or want to exchange ideas about dashboard design and use, feel free to reach out to us at NIWL@fhi360.org.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.