Date: Tuesday, February 24, 2026
Hello, I’m Bethany Edwards, and I work as an evaluator inside a multi-service nonprofit organization called Center for Transforming Lives in Fort Worth, Texas. Much of my work sits at the intersection of accountability and learning—helping teams make sense of data while navigating the real-world complexity of serving families across very different program areas. Over time, I’ve learned that strong evaluation practice depends less on having perfect systems and more on building habits that support curiosity and reflection.
Below are a few lessons that have shaped how I approach evaluation in complex nonprofit settings.
Early in my career, I treated curiosity as something individual: an evaluator’s responsibility to ask good questions. Over time, I’ve learned that curiosity is far more powerful when it becomes an organizational practice.
In practical terms, this means designing routines that invite questioning: consistent, predictable data reporting and dashboard discussions, reflection prompts embedded in reports, and regular pauses to ask what surprised us or what still feels unclear. These small, structured moments help shift evaluation from a compliance exercise to a shared learning process, especially in organizations where programs operate on very different timelines and definitions of success.
When curiosity is built into the system, learning becomes more consistent—and less dependent on any one person.
In complex environments, data problems are inevitable. What has helped most is slowing down long enough to understand what kind of problem we are facing before jumping to solutions.
Using a simple Evaluation Troubleshooting Grid, teams can sort challenges into categories such as clarity, feasibility, relevance, or infrastructure. Is a measure misunderstood? Too burdensome to collect? Misaligned with how the program actually works? Or constrained by system limitations?
This approach has helped us respond more thoughtfully and avoid quick fixes that don’t address the root issue. It also creates space for honest conversations about when an indicator needs refinement or when curiosity is signaling that we are asking the wrong question altogether.
While formal training has its place, I’ve found that evaluation capacity develops most effectively through regular, low-stakes use of data. Brief conversations around a single chart, trend, or qualitative insight often do more to build confidence than lengthy workshops.
These moments allow staff to see how data connects to their work and decision-making in real time. Over time, this shifts evaluation from something that happens to programs to something that happens with them—strengthening both data quality and organizational learning.
Evaluating complex nonprofit programs will always involve uncertainty. Rather than avoiding that uncertainty, I’ve found it more productive to create structures and norms that make curiosity visible, shared, and actionable. These practices have helped our organization ask better questions—and use what we learn to improve along the way.
The American Evaluation Association is hosting Texas Evaluation Network (TEN) Week with our colleagues in the TEN Affiliate. All contributions to AEA365 this week come from TEN members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association or any other contributors to this site.