Date: Saturday, January 24, 2026
Hello! My name is Sienna Jae Taylor. I’m a nonprofit professional and a student completing my Master’s in Community Development and a Graduate Certificate in Evaluation at the University of Victoria. In this post, I focus on how differences between closed and open-source AI raise important considerations for culturally responsive evaluation (CRE).
As evaluators increasingly encounter AI tools for data analysis, reporting, or sensemaking, questions of ownership and governance are often overlooked. From a culturally responsive perspective, however, who controls AI systems matters just as much as what they do. The considerations below highlight how different ownership models shape transparency, power, and participation in evaluation practice.
Closed or proprietary AI systems limit visibility into how data are collected, analyzed, or interpreted, making it difficult for evaluators and communities to understand or challenge how conclusions are reached. When AI systems operate as “black boxes,” this lack of transparency undermines shared meaning-making and accountability—both central to CRE.
In contrast, open-source AI allows evaluators and communities to examine how tools are trained and how data are used. This openness can support shared interpretation, local adaptation, and more culturally grounded sensemaking. While transparency alone does not ensure equity, open-source systems provide a stronger foundation for participation and trust.
Reflection Question: Can I clearly explain how this AI tool works, who controls the data, and how interpretations are made? And am I able to question that process?
AI systems are never neutral. Closed AI often concentrates decision-making power among a small group of developers and organizations, embedding dominant assumptions about what knowledge matters and how problems should be defined. These embedded worldviews can marginalize community knowledge and unintentionally reproduce inequities in evaluation practice.
Open-source AI does not eliminate bias, but it can redistribute power by enabling broader participation in reviewing, adapting, and questioning how AI systems function. This creates space for reflexivity and for challenging whose knowledge is embedded in evaluation tools—an important consideration for culturally responsive evaluators.
Reflection Question: Whose assumptions and values are reflected in this AI tool, and does it leave room for community knowledge and alternative ways of knowing?
Open-source AI is often positioned as a more equitable alternative because it allows users to modify and adapt tools to their own contexts. This flexibility can support local innovation, shared governance, and co-design—principles that align closely with CRE.
At the same time, openness does not automatically translate into meaningful participation. Communities may lack the resources, technical capacity, or influence needed to shape how open-source AI tools are developed or used. Without intentional efforts to support co-design and collective decision-making, even open systems can reproduce existing power imbalances.
Reflection Question: Who has real influence over how this AI tool is selected, adapted, or governed—and what would need to change for communities to help shape it?
Comparing closed and open-source AI highlights why ownership and governance matter for CRE. While closed systems often limit transparency and participation, open-source approaches offer greater potential for shared meaning-making, reflexivity, and shifts in power. At the same time, openness alone does not guarantee equity; meaningful alignment with CRE depends on intentional practices that support community participation, co-design, and ongoing reflection about who benefits—and who remains excluded—when AI is introduced into evaluation work.
While AI inevitably reflects the biases of its creators, those creators can change. AI doesn’t have to be something communities simply use—it can be something they build. This presents a powerful opportunity not only to strengthen relationships with community members, but also to empower them as creators and co-designers.
The American Evaluation Association is hosting GenAI and Culturally Responsive Evaluation week. The contributions all this week to AEA365 come from students and faculty of the School of Public Administration's Graduate Certificate in Evaluation program at the University of Victoria. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.