Date: Saturday, August 16, 2025
Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
Hello, AEA365 community! We’re Liz DiLuzio, Zach Tilton, and Linda Raftree, here to share some tips and resources about using AI for data analytics.
When many of us first experimented with AI tools like ChatGPT, Claude, or Copilot, it was for small wins: rewriting text, cleaning up notes, or troubleshooting bits of code. These uses saved time, but they didn’t fundamentally change our evaluation practice.
The real value of AI comes when we use it to strengthen how we design, analyze, and communicate. That requires moving beyond surface-level tasks to strategies that make AI a true thought partner. Below are three approaches you can put into practice right now.
A common frustration with AI is that it spits out answers without showing how it got there. That makes it harder to trust or critique the output. Chain-of-thought prompting helps fix that.
Instead of asking:
“What are the main themes in this dataset?”
Try asking:
“List three possible themes in this dataset. For each, explain step by step how you identified it. Then suggest what might be missing.”
This kind of prompt generates not just an answer, but a reasoning trail. You can then evaluate whether the logic holds up and where additional human judgment is needed.
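If you work with AI tools programmatically rather than through a chat window, the same idea can live in a small prompt template. Here's a minimal sketch; the `build_cot_prompt` helper and its wording are our own illustration, not any particular tool's API:

```python
# Sketch: a chain-of-thought prompt template that asks the model
# for a reasoning trail, not just an answer.
# (build_cot_prompt is a hypothetical helper, not a library function.)

def build_cot_prompt(n_themes, data_description):
    """Return a prompt that requests step-by-step reasoning per theme."""
    return (
        f"List {n_themes} possible themes in {data_description}. "
        "For each, explain step by step how you identified it. "
        "Then suggest what might be missing."
    )

# You would pass this string to whatever AI tool you use.
prompt = build_cot_prompt(3, "this set of open-ended survey responses")
print(prompt)
```

The payoff is consistency: every analysis run asks for the reasoning trail the same way, which makes the outputs easier to compare and critique.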
Another place evaluators get stuck is translating broad ideas into concrete measures. AI can be a powerful brainstorming partner if you guide it to iterate.
Start broad:
“Suggest five possible survey items to measure community trust.”
Then refine:
“Now adapt those items for a youth-focused program in rural settings.”
By layering prompts, you create items that are not only relevant but also tailored to context. This reduces blank-page syndrome while keeping you in control of the final design.
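The mechanics behind layering are simply conversation history: each new prompt builds on the turns before it. As a rough sketch (the `ask_llm` function below is a hypothetical stand-in for your AI tool of choice, which handles this history for you in a chat interface):

```python
# Sketch: layered prompting as an accumulating conversation history.
# ask_llm() is a hypothetical placeholder, not a real API call.

def ask_llm(history, prompt):
    """Append the user prompt and a (stubbed) model reply to the history."""
    history.append({"role": "user", "content": prompt})
    reply = f"[model reply to: {prompt}]"  # placeholder; a real tool answers here
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
ask_llm(history, "Suggest five possible survey items to measure community trust.")
ask_llm(history, "Now adapt those items for a youth-focused program in rural settings.")
# The second prompt is interpreted in light of the first answer,
# because both turns travel together in the history.
```

In a chat window this happens automatically; the point is that each refinement prompt only works because the broad one came first.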
AI outputs often mirror the biases baked into the models' training data. Rather than ignoring that risk, evaluators can put AI to work in spotting problems.
After drafting a report section, try:
“Review this text for bias or deficit framing. Suggest ways to reframe it in strengths-based language.”
This doesn’t replace your own critical review, but it gives you another lens to catch language that might otherwise slip through.
These strategies illustrate a bigger point: AI is not here to replace evaluators. Instead, it can sharpen our practice if we learn to engage with it thoughtfully. By treating AI as a collaborator rather than a shortcut, we not only save time but also improve the quality, inclusivity, and relevance of our work.
The evaluation field is at a moment of transition. Those who learn to use AI responsibly will set the standard for quality and innovation in the years ahead.
Want to dig deeper?
If these strategies spark ideas, you’ll want to join us in our upcoming workshop, Intermediate Data Analytics with AI: Digging Deeper with LLMs, a two-part virtual workshop. You can learn more and register here.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.