Date: Wednesday, May 13, 2026
Hello. I’m Arthur Hernandez, and I’ll admit something: a year ago, I was skeptical that AI had much to offer seasoned evaluators when it came to reporting. I’ve been writing evaluation reports for a long time, and I wasn’t convinced a machine could meaningfully improve a process I’d spent decades refining. I was partially right, and usefully wrong.
The most valuable AI use I’ve discovered isn’t generating visualizations. It’s stress-testing how I communicate findings. I’ll draft a key findings section and paste it into an AI tool with a prompt like, “Read this as a school board member with no evaluation background. What’s confusing? What questions would you have?” The feedback isn’t perfect, but it consistently catches jargon I’ve gone blind to, unclear transitions, and assumptions about what my reader already knows. It’s like having a fresh pair of eyes at eleven o’clock at night when your report is due in the morning. Every evaluator knows that feeling.
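For evaluators who end up repeating this stress-test across many reports, the prompt itself is the reusable part. Here is a minimal sketch of that idea in Python; the function name, persona wording, and sample finding are all illustrative, not a prescribed template, and the resulting string would still be pasted into (or sent to) whatever AI tool you use:

```python
def persona_review_prompt(
    findings_draft: str,
    persona: str = "a school board member with no evaluation background",
) -> str:
    """Wrap a key-findings draft in a reviewer-persona prompt for an AI tool."""
    return (
        f"Read the following evaluation findings as {persona}.\n"
        "What's confusing? What jargon needs plain language? "
        "What questions would you still have?\n\n"
        f"---\n{findings_draft}"
    )

# Hypothetical example finding, for illustration only.
prompt = persona_review_prompt(
    "Attrition in the comparison group threatened internal validity."
)
```

Keeping the persona as a parameter makes it easy to rerun the same draft past several imagined readers (a funder, a program director, a parent) and compare what each one flags.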
Here’s a practical reality of modern evaluation: we need to communicate the same findings to multiple audiences. A technical appendix for the program team. An executive summary for leadership. An accessible brief for community stakeholders. AI can help you adapt a core narrative across those layers. Start with your most comprehensive version, then ask the tool to draft a plain-language summary or identify which findings to prioritize for a one-pager. You’ll absolutely need to review and revise, but it cuts significant time from what is essentially translation work, and translation work is where many of us lose steam.
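The layering idea above can also be made explicit: one core narrative, plus a short per-audience instruction. This is a sketch under my own assumptions, not a fixed taxonomy; the audience labels, instructions, and sample finding are hypothetical, and the output is again just a prompt for your AI tool of choice:

```python
# Hypothetical audience "layers" for one core narrative.
AUDIENCE_INSTRUCTIONS = {
    "technical": "Keep methods detail; assume evaluation training.",
    "executive": "Summarize in under 200 words; lead with decisions needed.",
    "community": "Use plain language at roughly an 8th-grade reading level.",
}

def adaptation_prompt(core_narrative: str, audience: str) -> str:
    """Build a prompt asking an AI tool to adapt the narrative for one audience."""
    instruction = AUDIENCE_INSTRUCTIONS[audience]
    return f"Rewrite the following for a {audience} audience. {instruction}\n\n{core_narrative}"

# Illustrative use: generate the community-facing translation request.
brief = adaptation_prompt(
    "Program X improved average attendance by 12 percentage points.",
    "community",
)
```

Starting every version from the same `core_narrative` string is the point: the findings stay fixed, and only the translation instruction changes, which makes the review-and-revise pass easier to do honestly.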
I’ll end with something I feel strongly about. As evaluators, our credibility rests on transparency. If AI helped shape your visualizations, draft your narrative, or surface patterns in your data, say so. It doesn’t diminish your work; instead, it demonstrates methodological honesty. The AEA Guiding Principles for Evaluators are clear that integrity is foundational to our practice. I’ve started adding a brief methods note to my reports whenever AI tools played a meaningful role. One sentence. That’s all it takes, and it matters more than we might think as our field sets norms around these tools.
AI won’t replace the evaluator’s voice, judgment, or relationships. But used with care and candor, it can help us produce clearer, more accessible, and more timely work. After forty-some years in this field, I’ll take that.
The American Evaluation Association is hosting Data Visualization and Reporting (DVR) Week with our colleagues in the DVR Topical Interest Group. All contributions to AEA365 this week come from DVR TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.