Date: Thursday, January 22, 2026
Hi, I’m Kevin Parsons, a student in the Graduate Certificate in Evaluation. The growing influence and rapid evolution of artificial intelligence (AI) are changing how knowledge is gathered, created, analyzed, and presented. For evaluators, AI can be a powerful tool that allows them to work faster, whether by synthesizing information, assisting research, or analyzing data. Yet while AI can make evaluators more efficient, there are also concerns about its use that need to be managed. In particular, AI may be at odds with the values of culturally responsive evaluation (CRE), such as putting relationships first and respecting data sovereignty, as well as with questions about how the underlying models are trained. The core question is not only whether the use of AI in evaluation is ethical, but whether its use can ever be relational, or whether certain applications of AI will damage the relationship between evaluator and community.
AI is not going anywhere any time soon, and it can be a useful resource for evaluators. However, its use risks repeating evaluation errors of the past, because no system or tool designed by humans is neutral. The “black box” nature of AI models means that users do not know how they were designed, by whom, or what data was used to train them. Because humans were involved in their creation, biases exist even if no human is “writing” the words that appear on the screen.
Researchers have demonstrated the importance of the interconnection of context and relationships in CRE, where participatory practices lead to genuine partnership and the co-construction of meaning within the evaluation. This requires evaluators to share knowledge generation with those participating in the evaluation and to recognize the falsehood of evaluator impartiality, a recognition that extends to the use of AI. Research, data collection, and analysis aimed at measuring outcomes only make sense in the context of the people, place, and culture in which the program is being facilitated. If the proper context is not, or cannot be, included in an AI model, any conclusions drawn from it risk mistaking patterns for understanding. In the worst case, conclusions drawn from AI-generated information that are not grounded in culture risk damaging relationships.
As evaluators, we are typically not members of the community and culture(s) we are asked to evaluate, and the communities we enter may have good reason to mistrust us. This is why it is so important to recognize our place as outsiders within the evaluation process and to focus on building relationships with trusted community partners.
AI may be used in evaluation in ways that put too much emphasis on frameworks and the knowledge embedded in those systems, which sit even further outside the community. For example, if an AI model has been trained primarily on Western data and worldviews, what it generates will likely speak about other communities and cultures as an outsider. A model that interprets the stories of other cultures or summarizes community interviews without cultural fluency may reproduce the very misrepresentations that CRE attempts to correct.
Use AI tools as just that: another tool. Don’t let them come between the evaluator and the community. Think intentionally about how to use this tool, keep its limitations in mind, and always ensure that respect for the community and the information they share with you comes first.
The American Evaluation Association is hosting GenAI and Culturally Responsive Evaluation week. This week’s contributions to AEA365 come from students and faculty of the School of Public Administration’s Graduate Certificate in Evaluation program at the University of Victoria. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.