Date: Friday, January 23, 2026
Greetings! My name is Tash McLeod and I recently completed a Graduate Certificate in Program Evaluation from the University of Victoria’s School of Public Administration. With a background in Gender and Critical Race Studies, hospitality leadership, and lived experience as a mixed-race immigrant woman, I apply an intersectional approach and incorporate evaluative practices into my work as Operations Manager for WORTH Association, a national nonprofit advancing equity in Recreation, Tourism, and Hospitality.
Our theory of change begins with programming that applies a culturally responsive evaluation approach at every stage of an intervention, from conception and development through implementation, evaluation, and refinement. One such initiative is the WORTH Mentorship Program, and what follows is a case study examining how generative AI intersects with culturally responsive evaluation (CRE) in real-world nonprofit practice.
In late 2024, we redesigned the WORTH Mentorship Program. One of the tools developed for this new iteration was an application intake form used to assess prospective mentors' and mentees' eligibility and fit, and to inform pairing.
Prior to this, the organization used paid software that relied on AI pairing algorithms. As the intermediary user and client, we had no visibility into the back-end code or how these algorithms operated, but the results were unsatisfactory for a variety of reasons. The program fell short of the outcomes we had anticipated for participants and the organization. Attrition was high, engagement was low, and participant satisfaction was abysmal. The post-program survey revealed multiple instances of participants feeling mismatched, unsupported by the organization, and frustrated navigating the platform.
For a program designed to be accessible and low-barrier, and to support, uplift, and build community, this was less than ideal. The software removed our ability to have meaningful dialogue, build relationships, analyse data, and collaborate in ways that would lead to a co-created, better targeted, and more responsive program. AI in this instance proved problematic: it silenced community knowledge and privileged "data-driven" insights over the relational, narrative, and contextual. The functionality, analytics, and reports the software promised would "win back my time" instead flattened the richness of life into data points and metrics.
In our redesign, we adopted a "people-first" approach. This involves reading every word our applicants choose to share and deliberating until matches are unanimous. It sometimes requires long, persuasive conversations, or even inviting prospective participants into the process, which allows us to stand behind our matches and maintain the trust of our community. It is, however, incredibly time intensive. To analyse this volume of data and clean it for ease of reference in pairing, I use the paid version of ChatGPT to condense and code applicant responses, asking it to identify key themes and patterns. In this way, AI serves as a de facto research assistant that mitigates information overload while preserving our human judgement, as the sketch below illustrates.
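To make the condensing-and-coding step concrete, here is a minimal sketch of what it might look like if scripted rather than done through the ChatGPT interface. This is an illustration only, not WORTH's actual workflow: it assumes the OpenAI Python SDK (openai >= 1.0) with an OPENAI_API_KEY in the environment, and the file name, column names, model choice, and prompt are all hypothetical stand-ins.

```python
# Illustrative sketch: condensing and theme-coding applicant free-text
# responses with an LLM so a human team can pair applicants more easily.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical coding prompt; in practice this would be co-developed
# with the team and tested against a sample of real responses.
PROMPT = (
    "Condense this mentorship application response into two sentences, "
    "then list 3-5 thematic codes (e.g., goals, identity, availability). "
    "Preserve the applicant's own wording where possible."
)

def code_response(text: str) -> str:
    """Ask the model for a summary plus thematic codes; every output is
    reviewed by a human before any pairing decision is made."""
    reply = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": text},
        ],
    )
    return reply.choices[0].message.content

# "applications.csv", "applicant_id", and "response" are hypothetical.
with open("applications.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        print(row["applicant_id"], code_response(row["response"]), sep="\n")
```

The design choice worth noting is where the AI sits: it condenses and organizes, but the summaries feed a human deliberation, and no match is made by the model itself.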
That human touch has gone a long way. Over the past two cohorts we have seen record-low attrition, record-high engagement, and a 4.9/5 star rating for the program. Participant testimonials routinely use words like "life-changing," "game-changing," "transformational," and "incredible." We have been asked, "How could you have possibly known to pair me with the perfect [mentor/mentee]?" Gratitude is abundant, with enthusiastic and heartfelt "thank you for this community" messages arriving both personally and on public social media forums.
When we compare the outcomes of fully automated AI pairing with those of AI-assisted manual pairing, we can see both how to counter the shortcomings of machine learning and the potential benefits it can bring to CRE.
The American Evaluation Association is hosting GenAI and Culturally Responsive Evaluation week. The contributions all this week to AEA365 come from students and faculty of the School of Public Administration's Graduate Certificate in Evaluation program at the University of Victoria. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.