Date: Monday, January 19, 2026
Hello! My name is Sam Young, and I’m a graduate student at the University of Victoria in the Master’s in Community Development program. I also work full time as a project manager at a community-involved non-profit in BC. In a recent class assignment, we were asked to think critically about how artificial intelligence is being integrated into evaluation work, and what that change might mean for relationship, accountability, and care in Culturally Responsive Evaluation (CRE).
AI is often framed as neutral, efficient, and objective: a way to reduce bias and human error. But the more I think about it, the more I question what gets lost when evaluation moves further away from human relationship, context, and lived experience.
Knowledge today is increasingly treated as something we retrieve rather than something we share. It’s common to hear “just Google it” instead of learning through conversation, story, or reflection. Thinkers like Ursula Franklin and Neil Postman warned that technologies built around efficiency quietly change how we view the world. AI extends this extractive mindset by transforming lived experience into data: measurable, sortable, and seemingly objective, but often disconnected from meaning.
This is where CRE feels especially important to me. CRE understands evaluation as a human practice, grounded in trust, relationship, and accountability to community. Scholars like Stafford Hood, Rodney Hopson, and Karen Kirkhart remind us that evaluation is never neutral: it reflects values, power, and worldview.
CRE also requires reflexivity. Evaluators are expected to think critically about who they are, where they sit, and how their decisions affect others. AI systems simply can’t do this. They don’t build relationships, sit with discomfort, or reflect on their own positionality.
A growing body of research shows that AI often reproduces the same inequities it claims to solve. Work on postcolonial computing points out that many AI systems are developed in Western, corporate contexts and shared as universal solutions. When applied to complex social issues, they can erase local knowledge and cultural context.
Data mining and predictive analytics may identify patterns, but those patterns are shaped by historical inequities and biased datasets. The result is technology that can reinforce harm while appearing neutral. Reports from the AI Now Institute have also shown how these systems concentrate power and limit meaningful community influence, even when participation is promised.
From a CRE perspective, this is concerning. Evaluation that removes people from the process, or reduces them to categories, risks repeating colonial patterns of knowledge extraction rather than supporting learning or accountability.
One lesson that has really stayed with me is this: if evaluation isn’t grounded in relationship, it isn’t culturally responsive, no matter how advanced the tool. CRE depends on reciprocity, dialogue, and shared interpretation. These are not things AI can replicate.
AI will likely continue to play a role in evaluation, but it shouldn’t replace the relational, ethical, and contextual work that culturally responsive evaluation demands. Franklin reminds us that technology should serve human values, not override them.
For me, the real question moving forward isn’t whether AI can be used in evaluation, but whether we are willing to protect the human heart of the work while doing so.
The American Evaluation Association is hosting GenAI and Culturally Responsive Evaluation Week. All contributions to AEA365 this week come from students and faculty of the School of Public Administration’s Graduate Certificate in Evaluation program at the University of Victoria. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.