Date: Wednesday, January 21, 2026
Hello! I’m Terri-Lyn Bennett Campbell, an evaluator living and working in the unceded territories of the Sḵwx̱wú7mesh Úxwumixw (Squamish Nation), in the Sea-to-Sky corridor of British Columbia between Vancouver and Whistler. I come to evaluation with a background in public health epidemiology and policy development. Recently, my work has focused on culturally responsive and equity-centered evaluation, as those commitments meet the rapid rise of artificial intelligence (AI).
Like many of you, I’m watching AI move into public programs, nonprofits, and evaluation practice at a pace that feels faster than most of us can comfortably keep up with. Whether we feel ready or not, AI is shaping how information is collected, analyzed, and interpreted. For me, the integration of AI into evaluation is not simply a question of efficiency or innovation, but one of social justice.
The question that keeps coming up is not "How do we use AI?" but rather "How do we use AI in ways that strengthen, rather than erode, culturally responsive evaluation (CRE)?"
AI tools can support evaluators with routine tasks such as transcription, summarizing documents, or sorting large datasets. In theory, that efficiency could free up more time for the relational, participatory work at the heart of CRE. Accessibility features, such as speech-to-text, translation tools, or voice interfaces, can also help reduce barriers for multilingual communities or people with disabilities. We need to keep in mind, however, that efficiency is never neutral.
As scholars such as Sasha Costanza-Chock argue through the lens of design justice, AI systems learn from historical data shaped by long-standing inequities. When data reflect colonialism, racism, sexism, or ableism, AI can reproduce those same patterns, only faster and at greater scale. Without care and reflection, AI and algorithmic systems risk reinforcing the very injustices that evaluation often seeks to surface.
Recent research, such as a study by Cabanillas-García and colleagues, shows that while evaluators may appreciate AI for technical or administrative tasks, there is understandable hesitation about using it for deeper interpretation and meaning-making. This makes sense. Generative AI tends to gravitate toward the most “probable” story. CRE, by contrast, asks us to notice contradictions, emotions, silences, and culturally specific meanings.
AI can support pattern-finding, but meaning-making must remain with people, and ideally with communities themselves. Without shared interpretation and dialogue, AI risks becoming extractive, summarizing lived experiences without voice, context, or consent.
Many AI ethics frameworks emphasize privacy, transparency, or accountability. These matter, but they usually stop short of addressing essential questions of power: Who designs AI? Who controls it? And who is governed by it?
Ricaurte and Zasso remind us that digital systems often universalize the worldview of those with institutional power. CRE pushes evaluators to continually ask: Whose voices shape the knowledge being produced? Whose experiences are missing? Who benefits—and who is harmed?
AI can help expand accessibility, surface inequities, and reduce the burden of some evaluation tasks, but it should never replace the conversations, relationships, and shared meaning-making that are central to CRE. One guiding question I return to is: Does this use of AI deepen community voice, or diminish it?
AI is already influencing the future of evaluation, but it should not determine our values. Culturally responsive evaluation offers a compass for keeping power awareness, people, and context at the center. If communities help lead decisions about how and when AI is used, we can harness its benefits without sacrificing the relational and ethical foundations that make evaluation worth doing in the first place.
The American Evaluation Association is hosting GenAI and Culturally Responsive Evaluation week. The contributions all this week to AEA365 come from students and faculty of the School of Public Administration's Graduate Certificate in Evaluation program at the University of Victoria.

Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.