Date: Tuesday, January 20, 2026
My name is Ka Tong, and I work in child welfare within the BC Public Service as a family services social worker. I previously worked in community recreation with a focus on community development, and I now approach my work through circle-based collaboration, honouring the stories of the families I am privileged to work alongside.
I’ve spent much of my life moving between cultures, identities, and systems that were never quite designed with me in mind. As a first-generation immigrant, a racialized woman in Canada, and now a parent raising neurodivergent children, I learned early on how quickly people can be misunderstood when systems focus on what is visible rather than what is meaningful. In my work across community development, recreation, and family services, I see this pattern repeat itself from system to system. The context of the story gets lost. People become behaviours, risks, or data points.
This is why Culturally Responsive Evaluation (CRE) reflects how I try to show up in my work and in my life. I strive to be present with curiosity, humility, and an understanding that culture is foundational. Culture shapes how people communicate, how they build trust, how they respond to stress, and how safe they feel sharing their stories.
Lately, I’ve been sitting with these ideas alongside the growing presence of artificial intelligence (AI) in evaluation and research spaces. AI is now embedded in our daily rhythms. I keep returning to one question: whose intelligence does this technology reflect, and whose ways of knowing are missing?
I cannot help but feel that we continue to be caught in a colonized way of thinking and being. For generations, Western systems have decided what counts as valid knowledge while overlooking lived experience, cultural wisdom, and relational ways of knowing. Technology is not separate from that history. In Design Justice, Sasha Costanza-Chock reminds us that tools are shaped by the values and power structures of those who design them. Chimamanda Ngozi Adichie’s talk on the danger of a single story has also stayed with me: when a narrative is repeated often enough by those with power, it becomes the only story told and known.
These single stories are all around us, all the time. Families are reduced to labels, assessments, or concerns, while their histories, cultures, and strengths fade into the background. When I think about AI entering these spaces, I worry about how easily new single stories could be produced through systems trained on data that does not reflect the families I work alongside.
At the same time, I recognize the value of AI. When used carefully, it can support CRE in meaningful ways. Tools that help with translation, captioning, or plain-language summaries can reduce barriers for families navigating language, disability, or capacity. AI can also help organize transcripts or large amounts of qualitative data, which may free up more time for the relational work that matters most.
Technology can recognize patterns, but it cannot understand trauma histories, migration journeys, or the emotional weight beneath a caregiver’s words. Without care, it can quietly reinforce the same ableism, racism, and colonial assumptions that CRE tries to interrupt.
I see CRE as a compass. It keeps interpretation grounded in relationships, context, and multiple truths. If AI is used in evaluation, it needs to stay in a supporting role, offering prompts, drafts, or visuals that invite conversation rather than determining the narrative. Most importantly, it must never replace the relational accountability we hold to the communities who trust us with their stories.
One lesson I continue to learn as AI enters evaluation and research spaces is that technology needs a cultural and relational compass. AI can support accessibility, organization, and efficiency, but it cannot understand context, culture, or lived experience on its own. Without careful consideration, it risks reproducing the same single stories that families are often reduced to in public systems.
Adichie’s The Danger of a Single Story remains a reminder of how easily complexity can be flattened when one narrative is repeated often enough, and Costanza-Chock’s Design Justice reinforces that tools reflect the values and power structures of those who design them. Together, these ideas ground how I think about AI: it can assist our work, but it should never replace relational accountability, cultural humility, or the responsibility we hold to the people whose stories we are entrusted with.
The American Evaluation Association is hosting GenAI and Culturally Responsive Evaluation week. The contributions all this week to AEA365 come from students and faculty of the School of Public Administration’s Graduate Certificate in Evaluation program at the University of Victoria. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.