Date: Friday, May 30, 2025
Hi everyone, I’m Sujan Anreddy, working on technology solutions alongside public health professionals to improve access to care and community services, especially for underserved populations in rural Mississippi. In this post, I want to share how Retrieval-Augmented Generation (RAG) chatbots can support resource navigation, and how we can think about evaluating these tools.
Over the years, I’ve watched people struggle to locate basic help—food pantries, transportation for seniors, or home health providers. Community resource directories are often outdated, hard to search, or buried in PDFs. For older adults or caregivers without strong digital literacy, this makes already tough situations worse.
That’s why I’m excited about RAG pipelines, which blend AI search and response generation in real time. The chatbot doesn’t just “guess” an answer: it retrieves relevant data from the local community resource directory and then generates a tailored response using a language model like GPT or Llama.
We built a prototype using LangChain, a Python framework that links document retrieval to large language models. Our knowledge base included local agency listings, service descriptions, and other agency-related data. The result? Users could chat in natural language and get real answers backed by verified data.
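To make the retrieve-then-generate pattern concrete, here is a toy sketch in plain Python. This is not our actual LangChain pipeline, and the directory entries are invented examples: it simply scores each listing by keyword overlap with the question and assembles the grounded prompt a language model would receive.

```python
# Toy illustration of retrieval-augmented generation (not our production
# LangChain code): rank directory entries by word overlap with the
# question, then build a prompt that grounds the model in those entries.

def retrieve(question, directory, k=2):
    """Return the k entries sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        directory,
        key=lambda e: len(q_words & set(e.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question, entries):
    """Assemble the grounded prompt the language model would receive."""
    context = "\n".join(f"- {e}" for e in entries)
    return f"Answer using only these listings:\n{context}\n\nQuestion: {question}"

# Hypothetical directory entries for illustration only.
directory = [
    "Meals on Wheels: home-delivered food for seniors, Hinds County",
    "Rural Transit Authority: door-to-door rides for older adults",
    "Magnolia Home Health: in-home nursing visits, statewide",
]

question = "Who delivers food to seniors?"
prompt = build_prompt(question, retrieve(question, directory))
```

A real deployment replaces the keyword overlap with embedding-based search and sends the prompt to the model, but the shape of the pipeline is the same.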
But building is just half the story; evaluating the tool is where the real work begins. I found these strategies essential:
For transparency, we used TruLens to log prompts, retrieval quality, and hallucination risks in real time.
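To show the kind of signal this logging captures, here is a toy stand-in written in plain Python; it is not the TruLens API. It computes a crude groundedness score, flagging an answer as a hallucination risk when few of its words appear anywhere in the retrieved passages.

```python
# Toy groundedness check (a stand-in for the feedback TruLens logs for
# us, not its actual API): an answer whose words rarely appear in the
# retrieved passages is a hallucination risk worth reviewing.

def groundedness(answer, retrieved_passages):
    """Fraction of answer words that also occur in the retrieved text."""
    answer_words = set(answer.lower().split())
    source_words = set(" ".join(retrieved_passages).lower().split())
    if not answer_words:
        return 0.0
    return len(answer_words & source_words) / len(answer_words)

# Hypothetical interaction record for illustration.
log_entry = {
    "prompt": "Where can a senior get a ride to the doctor?",
    "retrieved": ["Rural Transit Authority offers door-to-door rides for seniors"],
    "answer": "Rural Transit Authority offers rides for seniors",
}
log_entry["groundedness"] = groundedness(
    log_entry["answer"], log_entry["retrieved"]
)
```

Real feedback functions use an LLM or embedding model to judge support sentence by sentence, but even a crude score like this makes drift visible when logged over time.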
If you’re evaluating similar tools, make sure your data sources are locally grounded. Even the best model fails if the retrieval base lacks coverage. We pulled ours from agency spreadsheets and state portals, and used OCR to scan PDFs. Every agency in our database is verified and categorized based on WHO’s age-friendly framework.
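As an illustration of that categorization step, here is a hedged sketch of rule-based tagging against WHO’s age-friendly domains. The keyword map below is an invented example, not our production taxonomy, and only a subset of the eight domains is shown.

```python
# Illustrative sketch of tagging agency records with WHO age-friendly
# domains. The keyword lists are assumptions made up for this example,
# not the actual mapping we use.

DOMAIN_KEYWORDS = {
    "Transportation": ["ride", "transit", "bus"],
    "Housing": ["housing", "home repair", "shelter"],
    "Community support and health services": ["health", "food", "pantry", "nursing"],
    "Communication and information": ["helpline", "directory", "211"],
}

def categorize(description):
    """Return every domain whose keywords appear in the description."""
    text = description.lower()
    return [
        domain
        for domain, words in DOMAIN_KEYWORDS.items()
        if any(w in text for w in words)
    ]

tags = categorize("Free food pantry and nursing checks for homebound seniors")
```

In practice a human reviewer confirms each tag; the rules only propose candidates, which keeps the verification burden manageable as the directory grows.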
The chatbot doesn’t have to be perfect; it just has to be useful and trustworthy. A simple phrase like “I didn’t find a match in your area, but you can try calling 211 or visiting your local community center” helped build credibility.
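That graceful-fallback behavior can be sketched in a few lines; this is an illustrative simplification of the idea, not our deployed code. When retrieval comes back empty, the bot answers honestly and points to a real alternative instead of guessing.

```python
# Toy sketch of graceful fallback: never fabricate a match, always
# leave the user with a next step (211 is the real nationwide
# community-services helpline).

def respond(matches):
    """Return a reply, falling back honestly when retrieval finds nothing."""
    if not matches:
        return ("I didn't find a match in your area, but you can try "
                "calling 211 or visiting your local community center.")
    return "Here is what I found: " + "; ".join(matches)

reply = respond([])
```

Small touches like this cost almost nothing to implement but did more for user trust than any accuracy improvement we measured.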
The American Evaluation Association is hosting Integrating Technology in Evaluation (ITE) TIG Week. The contributions all this week to AEA365 come from ITE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.