Date: Saturday, May 9, 2026
How do you ensure your evaluation data is used? From project design, to stakeholder engagement, to report writing and data sharing, evaluators have many opportunities to help ensure the data we collect are actually used for decision-making. In this series of blog posts, Washington Evaluators draws on our experiences in Washington, DC and beyond to share best practices for making evaluations useful.
As-salamu alaykum (peace be upon you)! My name is Tristi Nichols. I am a principal at Manitou, Inc., and I currently serve as the Secretary for Washington Evaluators.
Evaluation only matters if someone appreciates it—and then uses it. Making evaluation useful is less about methodology and more about relationships. It is about meeting people where they are, asking clarifying questions early, staying on message, and focusing on clients’ needs for future decision-making. If we do that well, the findings will not only be embraced but showcased.
Throughout my career in international evaluation with the United Nations and NGOs, I have learned that usefulness does not happen at the end of a project. It starts at the beginning and continues through every phase of the evaluation process. Some of the organizations I have worked with are the United Nations Office of Internal Oversight Services (OIOS), the UNFPA Independent Evaluation Office, and CARE Angola, CARE Zambia, and CARE Somalia (part of CARE Canada).
It begins with design: before drafting an Inception Report, a single survey question, or an interview guide, I ask a simple question: *What decisions do you need to make now?* Not six months from now. Not “in general.” I mean specific decisions tied to real timelines.
In global contexts, it is not just the client’s needs that matter, but also those of governments and communities. For instance, an official from a Ministry of Women and Children Affairs may want to focus on general access to maternal health services, while a donor or NGO partner may need to address the Leaving No One Behind (LNOB) agenda. These priorities are not in conflict, but trying to address all concerns equally can contribute to response fatigue. Finding the right balance requires nurturing relationships and identifying where compromise is possible.
A little humor goes a long way. Keeping things light—so that we can laugh together about how long surveys can be—helps build the kind of rapport that leads to honest conversations and, ultimately, a valued dataset.
Stakeholder engagement in multicultural settings also requires gathering honest feedback in more than one way. This might mean holding smaller group discussions instead of large forums or creating anonymous ways for people to share candid feedback.
My doctoral advisor, Dr. Jennifer Greene, once told me that “evaluation starts with silence.” That means listening more than talking. People are far more likely to use findings that they helped to develop and shape.
When it comes to reporting, clarity beats fancy wording every time. Plain language is not a downgrade; it is a strategy. If a program manager has to reread a sentence three times or pause to interpret its meaning, I have already lost them. Similarly, I focus on answering the “so what?” question as directly as possible. What does this mean for your program next month or next quarter? Why does it matter in this context?
I also ask clients—explicitly—how they want findings presented for different stakeholder groups. Do they need a short brief for senior leadership? Slides for a community meeting? A detailed annex for technical colleagues? There is no single “right” product. There is only what is useful in that context.
Finally, data sharing is critical. In international work, access can be uneven, and stakeholders’ attention and bandwidth vary widely. A short in-person briefing with clear talking points may work for one group, while an attractive printed summary may be more effective for another.
The American Evaluation Association is hosting Washington Evaluators Week with our colleagues in the Washington Evaluators local affiliate group. The contributions all this week to AEA365 come from Washington Evaluators members.

Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org.

AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.