Date: Saturday, March 7, 2026
Hello, AEA365 community! Liz DiLuzio here, Lead Curator of the blog. This week is Individuals Week, which means we take a break from our themed weeks and spotlight the Hot Tips, Cool Tricks, Rad Resources, and Lessons Learned from any evaluator interested in sharing. Would you like to contribute to future Individuals Weeks? Email me at AEA365@eval.org with an idea or a draft and we will make it happen.
Hello, I’m Jacqueline Singh, MPP, PhD (she/her), an Executive Evaluation & Program Design Advisor based in Indianapolis, Indiana. With more than 30 years of experience across higher education, government, not-for-profit, and pro-bono settings, much of my work now focuses on front-end evaluation planning—slowing things down just enough for partners to see what they are building before deciding how to evaluate it.
In an earlier AEA365 post, I wrote about the power of document models in program design. Since then, I’ve continued to use document models across program theory development, evaluability assessment, grant proposals, strategic planning, research, and policy analysis. Over time, I’ve come to see document models less as a design tool alone and more as a credibility-building practice at the very front end of evaluation work.
I often describe this part of my work as “pausing before logic”—a deliberate choice to avoid the rush to evaluation and slow things down just enough to demonstrate sound judgment before formalizing evaluation designs or analytic choices.
In practice, there can be pressure to move quickly to logic models, surveys, or performance measurement. These tools are valuable, but when introduced too early they can harden assumptions that haven’t yet been examined or shared. In complex systems and turbulent policy environments, that rush can quietly undermine both the evaluation itself and the evaluator’s credibility.
What’s written in program-related documents isn’t always what’s intended, and isn’t always what people think they are doing. When those gaps remain invisible, misalignment follows.
A document model is a visual or structured representation of an existing document, such as a grant proposal, conceptual framework, policy, or strategic plan. In higher education settings, it could be something as ordinary as a course syllabus. Unlike summaries or critiques, document models mirror what’s already there. They surface embedded logic, assumptions, priorities, sequencing, and omissions without judgment. As with any modeling practice, document models are provisional and open to correction by stakeholders.
I often create a draft document model as “pre-work” before meeting with partners. That single move—working from their document rather than my own framework—almost always captures attention. More importantly, it establishes credibility early by demonstrating care, restraint, and analytic discipline.
Used this way, document models function as credibility artifacts. They make evaluator judgment visible by showing that someone has taken the time to create a model to understand the intervention as written before proposing questions, frameworks, or methods.
That credibility, in turn, creates the conditions for trust. Partners are more willing to engage, correct, and reflect when they feel accurately understood. Quietly, document models also model a different habit: thinking before measuring; seeing clearly before leaping.
This practice draws from long-standing traditions in program theory and evaluability assessment associated with Leonard Bickman and Joseph Wholey, as well as document analysis traditions described by Elizabeth J. Whitt. Used as a front-end discipline, document models strengthen—rather than replace—evaluative tools such as logic models, while also informing subsequent evaluation design.
In practice, evaluation work can become routinized—essentially, process begets process, sometimes with diminishing value. At a time when credibility of evidence and evaluators alike is under scrutiny, practices that demonstrate judgment and good faith matter.
For me, pausing before logic and using document models as credibility artifacts has become one reliable way to meet partners where they are, establish trust through understanding, and design evaluations that are both useful and legitimate—especially in complex systems.
Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to aea365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.