Date: Tuesday, July 15, 2025
Hi, we’re Gizelle Gopez and Jenica Reed from Deloitte’s Evaluation and Research for Action (ERA) Center of Excellence (CoE), writing on behalf of the Health Evaluation TIG. One of our favorite things about being part of ERA is the opportunity to explore and challenge ourselves in the different ways of conducting program evaluations. This includes exploring the different Generative AI (GenAI) tools available in the market and piloting their application across the evaluation lifecycle. Yesterday you heard about GenAI in literature reviews, and today we’re going to talk about using AI to develop logic models!
Getting everyone on the same page when kicking off a program evaluation is super important. It’s like having a game plan that connects what you do to what you want to achieve. There are tools now that can whip up inputs, activities, outcomes, and even create visuals like logic models from documents or prompts. These tools help draft an initial program description for feedback. To make the most of GenAI, give clear prompts about what you need in your program narrative or logic model, and don’t forget to mention the context, goals, and any special instructions like language or tone.
Hot Tip #1: You’ll want to provide the tool with information on the topic or population of focus, who may be using the logic model, and other contextual factors of the program (context); the specific goal you want the AI to accomplish (objective); and any specific outputs you’d like to see generated (instructions). Depending on the tool used, you may get a text output or a visual of your program.
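If you like to keep your prompts consistent across projects, the context-objective-instructions structure above can be captured in a small template. Here’s a minimal sketch in Python; the function name and the example program details are ours (hypothetical), so swap in the specifics of your own program and tool.

```python
# A sketch of the context / objective / instructions prompt structure.
# The function name and program details below are hypothetical examples.

def build_logic_model_prompt(context: str, objective: str, instructions: str) -> str:
    """Assemble a structured prompt to paste into (or send to) a GenAI tool."""
    return (
        f"Context: {context}\n"
        f"Objective: {objective}\n"
        f"Instructions: {instructions}"
    )

prompt = build_logic_model_prompt(
    context=(
        "A community health program focused on maternal nutrition; "
        "the logic model will be used by program staff and funders."
    ),
    objective=(
        "Draft a logic model covering inputs, activities, outputs, "
        "and short- and long-term outcomes."
    ),
    instructions=(
        "Present the result as a table, in plain language suitable "
        "for a lay audience."
    ),
)
print(prompt)
```

Templates like this make it easy to tweak one element (say, the audience in the context, or the tone in the instructions) and compare the drafts your tool produces.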
Hot Tip #2: If your tool can accept uploaded documents, ask it to summarize the program’s main activities, outcomes, and goals. Then ask it to turn that summary into a logic model, using the prompt structure from Hot Tip #1.
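The two-step flow in Hot Tip #2 can be sketched as a simple prompt chain. In this sketch, `call_genai` is a hypothetical placeholder (every tool has its own API or interface), so the point is the shape of the workflow, not the particular calls:

```python
# A sketch of the two-step workflow: summarize a program document first,
# then turn that summary into a logic-model prompt.
# `call_genai` is a hypothetical placeholder, NOT a real API --
# substitute your GenAI tool's actual interface.

def call_genai(prompt: str) -> str:
    # Placeholder: a real implementation would send `prompt` to your tool
    # and return its response text.
    return f"[GenAI response to: {prompt[:40]}...]"

def summarize_then_model(document_text: str) -> str:
    # Step 1: pull out the main activities, outcomes, and goals.
    summary = call_genai(
        "Summarize the main activities, outcomes, and goals in this "
        f"program document:\n{document_text}"
    )
    # Step 2: reuse the context/objective/instructions structure from Hot Tip #1,
    # with the summary serving as the context.
    logic_model_prompt = (
        f"Context: {summary}\n"
        "Objective: Create a logic model from this program summary.\n"
        "Instructions: Organize it into inputs, activities, outputs, and outcomes."
    )
    return call_genai(logic_model_prompt)

draft = summarize_then_model("(program document text here)")
print(draft)
```

Splitting the work into a summarize step and a modeling step also gives you a natural checkpoint: you can review and correct the summary before any logic model is drafted.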
Hot Tip #3: Even if your GenAI tool doesn’t make visuals, it can help kickstart program descriptions. GenAI outputs are just drafts; clients and team members should always validate them. Like any logic model made from scratch, there should be several rounds of feedback from key people, decision-makers, and folks who benefit from the program. Getting different takes on the tool’s output is super helpful, and keeping the evaluator involved throughout is essential to using GenAI well in evaluation.
Hot Tip #4: Finally, while GenAI can boost efficiency, it’s important to pause and thoughtfully consider just how much AI should be involved in the planning, design, and execution of your evaluation.
There are different types of tools out there in the market, but which tool you use depends on your budget, your project’s compatibility with GenAI, and the implications of its use, including the resources consumed to produce any sort of output.
Have any suggestions for considerations, prompts, or other tools you’ve used for developing logic models? We’d love to hear about them. Drop them in the comments below!
This posting contains general information only, does not constitute professional advice or services, and shall not be used as a basis for any decision or action that may affect your business. Deloitte shall not be responsible for any loss sustained by any person who relies on this posting. As used in this posting, “Deloitte” means Deloitte Consulting LLP, a subsidiary of Deloitte LLP. Please see www.deloitte.com/us/about for a detailed description of our legal structure. Certain services may not be available to attest clients under the rules and regulations of public accounting. Copyright © 2025 Deloitte Development LLC. All rights reserved.
The American Evaluation Association is hosting Health Evaluation TIG Week with our colleagues in the Health Evaluation Topical Interest Group. The contributions all this week to AEA365 come from our Health Evaluation TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.