Date: Friday, May 9, 2025
We are Monica Gribben, Maggie May, and Sheila Schultz of HumRRO. We serve as external, independent evaluators of district, state, and national education programs. Currently, we are evaluating literacy and numeracy initiatives in Indiana and Alabama, respectively, as well as community school partnerships in California. Over the years we have conducted evaluations ranging from formative to summative, process to outcome, and quasi-experimental to experimental.
Our evaluations typically employ a mixed-methods approach. With a variety of methods at our disposal, we have learned a lot about how to get the most from data collection activities. In this post, we will share some of the lessons we have learned as evaluators and some hot tips for evaluating education (and other types of) programs.
Even if research questions (RQs) are included in a request for proposal (RFP), we discuss the RQs and evaluation goals with the client during the kickoff meeting. The RFP may not have been written by the client, so an early discussion gives everyone an opportunity to agree on how the RQs are interpreted and what the evaluation is expected to answer.
Educators typically are not evaluators, and although we have a background in education, we are not in the classroom implementing the programs we evaluate. Working closely with clients and partners (e.g., advisory boards, testing vendors) allows us to design data collection plans that leverage existing data and collect new data to fill gaps. Collecting new data from program interest holders (e.g., educators, students, parents) typically requires careful coordination with the client.
Hot Tip: Ask detailed questions about a client’s existing data before planning any new data collection.
As evaluators, we often need to educate our clients to ensure that the existing data, plus any new data we collect, are sufficient to answer their RQs. We also support clients in collecting the types of data needed to address the questions they want answered.
Hot Tip: Be as specific as possible when requesting data. Instead of asking for student performance data, we specify the assessment, date of administration, grade levels, content area, and other relevant descriptions of the specific types of data we need.
Hot Tip: Make sure you can identify denominators to calculate percentages! If 100 teachers attend a professional development workshop, we can’t interpret the participation rate unless we know the number of teachers who were invited or expected to attend.
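To make the denominator point concrete, here is a minimal sketch in Python using hypothetical counts (the numbers and variable names are illustrative, not from an actual evaluation):

```python
# Hypothetical example: computing a workshop participation rate.
# "attended" alone cannot be interpreted as a rate; the denominator
# (teachers invited or expected to attend) is required.

invited = 125    # illustrative count of teachers invited to the workshop
attended = 100   # illustrative count of teachers who attended

participation_rate = attended / invited
print(f"Participation rate: {participation_rate:.0%}")  # -> 80%
```

Without the invited count, the same 100 attendees could represent anything from full participation to a small fraction of the intended audience.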
We gather documents, presentations, and other artifacts of the program to learn the terminology and understand the nuances of the program. This groundwork is invaluable as we talk with interest holders, code open-ended survey responses, and analyze focus group and interview discussions.
Participation in our evaluations is often voluntary, so getting an adequate response rate can be a challenge. We use several strategies to reach our target participation: ask clients to send a letter introducing the evaluation and encouraging participation, minimize the burden on participants, collect data as part of program activities, send reminders, and rely on a bit of luck!
Hot Tip: Try a bit of humor to encourage responses. We used a simple knock-knock joke in a survey reminder email. While we can’t claim the joke helped boost our response rate, our response rate increased after the reminder.
The American Evaluation Association is hosting Educational Evaluation TIG Week with our colleagues in the Educational Evaluation Topical Interest Group. The contributions all this week to AEA365 come from our Ed Eval TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.