Date: Tuesday, May 27, 2025
Hello! I’m Valerie Ehrlich, PhD, a learning and evaluation consultant and coach helping monitoring, evaluation, research and learning (MERL) leaders and mission-driven organizations reach their goals at the intersection of learning, leadership, and AI. As a qualitatively trained researcher, I’ve recently been experimenting with tools like CoLoop, NotebookLM, ChatGPT, and Claude: generative AI tools built on large language models (LLMs) that let us engage with qualitative data in a compelling new way, by talking to it. It’s a shift in both method and mindset, with exciting possibilities and important risks to navigate.
When I learned qualitative methods 20 years ago, I remember cutting quotes into strips of paper, building exhaustive codebooks, and calculating inter-rater reliability by hand. Later, tools like Dedoose helped streamline data management. But now, AI-powered tools allow us to “talk” to our data conversationally: exploring patterns in real time, applying multiple theoretical frameworks to the same data set, questioning our own interpretations, and refining our understanding as though we’re talking to a colleague. These tools let us be curious about our data in a way that mirrors how we naturally make meaning: through inquiry, reflection, and conversation.
CoLoop ($200/month), ChatGPT or Claude ($20/month), and NotebookLM (free with paid options) all allow you to query your qualitative data conversationally. CoLoop is designed for research and includes features like high-quality transcription, built-in analysis grids, de-identification, and concept testing, making it a good fit for larger projects. But if you’re experimenting with just a few transcripts (be sure to de-identify as needed), Claude, ChatGPT, or NotebookLM are powerful low-cost starting points.
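If you are preparing transcripts for a general-purpose tool yourself, de-identification can start with a simple pattern-based redaction pass. Here is a minimal sketch in Python; the participant names, replacement tags, and patterns are hypothetical, and in real projects a pass like this should be paired with a curated name list and a human review before anything is uploaded:

```python
import re

# Hypothetical per-project list of participant names to redact.
PARTICIPANT_NAMES = ["Jordan Smith", "Priya Patel"]

def deidentify(text: str) -> str:
    """Replace known names, email addresses, and phone numbers with tags."""
    for name in PARTICIPANT_NAMES:
        text = re.sub(re.escape(name), "[PARTICIPANT]", text, flags=re.IGNORECASE)
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Simple US-style phone numbers (e.g., 555-123-4567)
    text = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]", text)
    return text

sample = "Jordan Smith (jordan@example.org, 555-123-4567) described the program."
print(deidentify(sample))
# → [PARTICIPANT] ([EMAIL], [PHONE]) described the program.
```

Patterns like these catch only the obvious identifiers; indirect identifiers (job titles, locations, distinctive events) still require a careful read-through.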
Once your transcripts are uploaded, the possibilities are vast: you can explore patterns, test alternative framings, and question your own interpretations through iterative querying. This iterative querying enables a new, organic process of discovery and allows us to refine the questions we’re asking of the data as new insights emerge.
In one project, I asked Claude to take the themes I’d coded and reshape them for a UX audience. It translated the findings into terminology that resonated with that group—without distorting the core insights. I couldn’t have done that nearly as efficiently on my own. That moment showed me what’s possible when we combine our own judgment with AI’s generative power.
Your topical and methodological expertise remains central. AI should amplify, not replace, your thinking. These tools are powerful, but your expertise and deep familiarity with your data are what keep them grounded and ethical.
The shift to using AI-powered tools for data analysis can feel overwhelming (I speak from experience!). Start experimenting with data you’ve already collected or coded. Compare what you learn from talking with the AI tool to what you gained from your analytic memos and manual codes. Notice where they align, where they diverge, and what surprises you.
Treat it like a conversation: ask questions, follow threads, and double back. Anchor the insights in theory, context, or stakeholder needs to give them meaning beyond surface patterns.
AI-assisted qualitative analysis feels fundamentally different—especially for those of us who’ve spent hours with highlighters and Denzin & Lincoln’s Handbook. But that foundational expertise is exactly what enables us to use these tools thoughtfully, balancing speed with depth.
Of course, speed means little without validation. I routinely test AI outputs against transcripts I know well—comparing what the model captures, what it misses, and what it distorts. This builds my confidence in how (and when) to trust its outputs.
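One lightweight way to structure that comparison is to score the overlap between your manual codes and the themes the model returns for a transcript you know well. A toy sketch in Python follows; the code labels are invented for illustration, and in practice you would normalize labels or match on meaning rather than exact strings:

```python
# Hypothetical comparison of manual codes vs. AI-suggested themes
# for a single well-known transcript. Labels are illustrative only.
manual_codes = {"trust", "peer support", "program access", "burnout"}
ai_themes = {"trust", "peer support", "staff turnover"}

captured = manual_codes & ai_themes   # what the model also found
missed = manual_codes - ai_themes     # what the model overlooked
novel = ai_themes - manual_codes      # new (or possibly distorted) suggestions

# Jaccard similarity: shared labels over all labels from either source
jaccard = len(captured) / len(manual_codes | ai_themes)

print("captured:", sorted(captured))   # → ['peer support', 'trust']
print("missed:", sorted(missed))       # → ['burnout', 'program access']
print("novel:", sorted(novel))         # → ['staff turnover']
print(f"jaccard: {jaccard:.2f}")       # → 0.40
```

The numbers matter less than the three buckets: what was captured, what was missed, and what appeared out of nowhere is exactly the triage that tells you how far to trust the model on a new transcript.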
Qualitative data is rich and often underused because of how resource-intensive it is to analyze. These tools can help us get meaningful insights into the hands of stakeholders faster—without sacrificing rigor. In a world of complex challenges, that feels like an opportunity worth exploring.
The American Evaluation Association is hosting Integrating Technology in Evaluation (ITE) TIG Week. The contributions all this week to AEA365 come from ITE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.