Date: Wednesday, March 4, 2026
Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
Hello! I’m Daniel Fonner, and I study public sector evaluation and responsible uses of data and AI as a researcher and lecturer at Southern Methodist University. As artificial intelligence (AI) enters public sector workflows, many evaluators are understandably cautious about transparency, bias, privacy, and over-automation. In a recent report for the IBM Center for The Business of Government, Responsible AI for Public Evaluation, I examine how AI can responsibly support evaluation and performance auditing without replacing human judgment.
Across government, AI is often deployed for prediction, automation, or efficiency gains. Evaluation, however, has largely been left out of the AI conversation, with a recent report from the OECD acknowledging that “[AI’s] use within government for policy evaluation has been limited and progressed slower than in other functions [across government agencies].”
At the same time, evaluators face growing constraints: limited staff capacity, increasing data volumes, and heightened expectations for transparency. This gap creates a risk. If AI is adopted without evaluation expertise, it may undermine accountability. But if evaluators ignore AI entirely, opportunities to strengthen learning and evidence use may be lost.
To address this tension, the report introduces Responsible AI for Evaluation (RAI-Ev): a framework that uses AI as a post hoc analytical tool to examine past human decisions and programs.
RAI-Ev is designed to align with evaluation standards many of us already use. It emphasizes a guiding question:
“What can AI reveal about how human decisions aligned (or failed to align) with stated program goals?”
RAI-Ev follows five steps that evaluators will find familiar, outlined in the report.
In the report, I illustrate this framework using a COVID-relief grant program, where AI was used to examine whether funding decisions aligned with stated equity and access goals, without replacing reviewers or altering award decisions.
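To make the idea of AI as a post hoc analytical tool concrete, here is a minimal Python sketch of one way an evaluator might check whether past award decisions tracked a stated equity goal. The column names, threshold, and toy data are invented for illustration; this is an assumption-laden sketch of the general pattern, not the analysis from the report.

```python
# Hypothetical sketch of a post hoc alignment check: compare award rates for
# applicants in high-need areas against other applicants, relative to a stated
# equity goal. Data and column names are invented for illustration only.
import pandas as pd

# Toy stand-in for an administrative dataset of past funding decisions.
applications = pd.DataFrame({
    "applicant_id":   [1, 2, 3, 4, 5, 6, 7, 8],
    "high_need_area": [True, True, False, False, True, False, True, False],
    "awarded":        [True, False, False, True, True, True, False, True],
})

# Assumed program goal (illustrative): award rates in high-need areas should
# be at least as high as elsewhere.
rates = applications.groupby("high_need_area")["awarded"].mean()
gap = rates.get(True, 0.0) - rates.get(False, 0.0)

print(rates.rename("award_rate"))
print(f"High-need minus other award-rate gap: {gap:+.2f}")
if gap < 0:
    print("Flag for human review: decisions may not align with the stated equity goal.")
else:
    print("No misalignment flagged by this simple check; human judgment still decides.")
```

Whatever tooling is used, the output is a flag for human interpretation, not a changed decision: the analysis looks backward at what happened, and reviewers retain full authority over awards.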
If you are curious about AI but cautious about its role in evaluation, a useful starting point is the premise that responsible AI does not mean “more automation.” It means using new tools in ways that strengthen evaluation’s core mission: learning, accountability, and public trust. For evaluators, the opportunity is not to become data scientists but to help shape how AI is used responsibly for evaluative purposes, especially in the public sector.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.