Date: Thursday, August 21, 2025
Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future individuals weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
Greetings, AEA365 community. We’re Elizabeth DiLuzio (consultant and trainer in evaluation and data analytics), David Fetterman (past president of the American Evaluation Association and president of Fetterman & Associates), and Michael Quinn Patton (founder and director of Utilization-Focused Evaluation and Blue Marble Evaluation).
The evaluation field is evolving rapidly. Political polarization, rapidly advancing technology, and global crises are reshaping the contexts in which we work. For new and emerging evaluators, this creates both uncertainty and opportunity: Which skills matter most now? How do we adapt without losing sight of purpose?
In November, we’ll be presenting at AEA’s annual conference to take on these questions. Together, we’ll share strategies from our forthcoming co-authored book designed to help evaluators stay relevant, effective, and hopeful in times of disruption.
Here’s a preview of three practical actions you can take today.
Evaluating transformation requires transforming evaluation itself. One of the most important shifts is from project thinking to systems thinking. Programs don’t operate in isolation. They intersect with health, education, climate, economic, and political systems. Ignoring those interconnections leads to narrow, misleading conclusions.
Here’s a simple way to start: place your program at the center of a page, then map its connections to the health, education, climate, economic, and political systems it touches, noting where those systems influence one another. This exercise builds evaluators’ capacity to see interdependencies, an essential skill in a world defined by complexity and turbulence.
AI is already transforming evaluation practice: voice-translation tools bridge language gaps, and platforms summarize large volumes of data and turn documents into engaging podcasts, making reporting more accessible. These technologies can expand participation and break down barriers if we use them responsibly.
Fetterman emphasizes that AI should not replace evaluators’ judgment but instead enhance community empowerment. For example, communities in India have used empowerment evaluation alongside digital tools to combat tuberculosis, equipping residents to track care and communicate with providers. In addition, an AI-infused empowerment evaluation approach is being used to build the evaluation capacity of an organization dedicated to criminal justice reform, ranging from helping them generate relevant AI-driven theories of change to producing AI-guided draft reports grounded in their data and lived experience.
Try this exercise: take a short piece of evaluation text you’ve written (say, a report excerpt) and ask an AI tool to rewrite it for a community audience. Then compare the result with your original. What became clearer? What got lost? This kind of experimentation reveals both the promise and the limits of AI while keeping empowerment and equity at the center.
Evaluation isn’t just about producing findings. It’s about helping partners develop the skills and confidence to use data long after the evaluator is gone. This means that capacity building isn’t a side task; it’s central to evaluation’s purpose. By approaching our work this way, we stop treating evaluation as a one-time project and instead see it as an investment in people’s ability to keep learning, questioning, and improving.
Try this: the next time you present findings, don’t just share results. Teach one concrete tool, like how to build a simple trend chart in Excel or how to ask a sharper “why” question of the data. Small steps like this build lasting capacity and make evaluation more sustainable. And over time, those small steps add up, helping partners feel not just informed by data but genuinely equipped to use it.
Together, these practices help evaluators not just survive disruption, but lead through it. They remind us that evaluation’s future isn’t about chasing the newest trend; it’s about staying grounded in purpose while adapting to change.
If these ideas resonate and you’d like to hear more, join us for our upcoming session: “What Evaluators Should Be Doing Now to Stay Relevant, Effective, and Grounded in Purpose.” We’ll share concrete strategies from our forthcoming book and invite discussion about how evaluators can stay adaptive and hopeful in complex times.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org . aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.