Date: Monday, April 6, 2026
Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
Hey there! We’re Drs. Ben Hansen (University of Michigan) and Jason Schoeneberger (RTI International). We are the respective leads in our partner organizations on the Evaluation Engine (EE) project, a convenient, secure, and affordable tool that helps schools use existing data to evaluate the quality and efficacy of their educational interventions.
One focus of our work is considering how we use longitudinal data systems to answer questions that matter to policymakers, practitioners, and communities. Many states have made substantial, continued investments in their statewide longitudinal data systems (SLDS) to track student and program outcomes over time. Although having these data should, in principle, make it easier to understand whether programs and policies are working, a critical step is ensuring the data are used well and appropriately.
Since 2021, we’ve collaborated with the Texas Education Agency (TEA) to help them answer the question: Did this program make a difference? Our partnership has taught us a few lessons along the way.
These lessons learned led us to ask ourselves: What if other policymakers throughout TEA have questions about the impact their programs are having on students?
EE was designed to make use of existing data to support the generation of rigorous evidence while consuming few agency resources. Our respective organizations partnered to establish routines that use SLDS data to construct high-quality comparison groups, run quasi-experimental analyses, and generate clear, accessible reports for non-technical users. For our work in Texas, we built EE to pre-compute match ratings, which allows quick assembly of comparison groups that account for the natural clustering of students within schools. Thus, evaluators working with TEA who have student identifiers can explore using EE to obtain rigorous, QED-based impact estimates that meet evidence standards, including those outlined under ESSA, while dramatically reducing the time and expertise required to conduct them. All this is accomplished while keeping private, student-level data secure.
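To make the matching idea above concrete, here is a minimal, purely illustrative sketch (not EE's actual code) of assembling a comparison group from pre-computed match ratings. The data, the greedy 1:1 nearest-neighbor strategy, the caliper, and the same-school preference are all assumptions for illustration; EE's real matching procedure is more sophisticated.

```python
# Hypothetical illustration of matching on pre-computed ratings.
# Each record: (student_id, school_id, rating), where "rating" stands in
# for a pre-computed match rating such as a propensity score.

treated = [
    ("t1", "school_A", 0.62),
    ("t2", "school_B", 0.35),
]
pool = [  # untreated students available as comparison candidates
    ("c1", "school_A", 0.60),
    ("c2", "school_A", 0.90),
    ("c3", "school_B", 0.34),
    ("c4", "school_C", 0.36),
]

def match_comparison_group(treated, pool, caliper=0.1):
    """Greedy 1:1 nearest-neighbor matching on rating, preferring
    same-school candidates to respect clustering of students in schools."""
    matches = {}
    available = list(pool)
    for t_id, t_school, t_rating in treated:
        # Keep candidates within the caliper; sorting the tuples puts
        # same-school candidates first, then the smallest rating gap.
        candidates = sorted(
            (c_school != t_school, abs(c_rating - t_rating), c_id)
            for c_id, c_school, c_rating in available
            if abs(c_rating - t_rating) <= caliper
        )
        if not candidates:
            matches[t_id] = None  # no acceptable match found
            continue
        best_id = candidates[0][2]
        matches[t_id] = best_id
        # Match without replacement: remove the chosen candidate.
        available = [c for c in available if c[0] != best_id]
    return matches

print(match_comparison_group(treated, pool))
# → {'t1': 'c1', 't2': 'c3'}
```

The point of pre-computing the ratings is that this search step becomes cheap enough to run on demand, which is what lets a tool like EE turn a comparison-group request around quickly.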
Currently, EE is limited to evaluation activities for TEA-based programs and policies. Do you have an SLDS that isn't realizing its full potential? Reach out, and let's discuss how EE might be able to help your state.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.