Date: Tuesday, February 3, 2026
Greetings! I am Daniela Maciel Pinto, an evaluator and researcher at a public agricultural research organization in Brazil, working at the intersection of agricultural research, evaluation, and innovation management for research governance. Thanks to an International Travel Award from the American Evaluation Association, I attended the most recent Evaluation conference in Kansas City, where I presented evaluation research conducted across South America, Europe, and Oceania. Over the past few years, I have been asking a deceptively simple question: What happens to evaluation results after they are delivered?
This question is central to my PhD research. Rather than treating impact as an assumed outcome, I examine how evaluation results are taken up, interpreted, and mobilized within public agricultural research organizations to generate impact.
Across the literature and empirical evidence, one finding stood out consistently: producing high-quality evaluations does not guarantee their use. In agricultural R&D organizations, evaluation results are still used predominantly for accountability and external reporting, while learning, prioritization, and strategic decision-making remain secondary. Importantly, this limitation is rarely methodological; it reflects organizational and institutional conditions that shape whether evidence informs action.
Below, I share what emerged from this research and from exchanges during the conference.
One practical strategy is to use reflexive diagnostic frameworks to examine how evaluation results are used. In this context, I introduced AgroRadarEval, an interactive framework designed to support reflection on the use of evaluation results in agricultural research organizations. Rather than prescribing “best practices,” AgroRadarEval helps organizations explore where evaluation results circulate, who engages with them, and how (or if) they influence planning, prioritization, and learning.
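For readers who like to tinker, here is a minimal sketch of how a radar-style self-assessment of evaluation use might be scored and plotted. To be clear, the dimensions and scores below are invented for illustration only; they are not the actual AgroRadarEval instrument, just a hypothetical example of the general radar-diagnostic idea.

```python
# Hypothetical radar-style diagnostic sketch, inspired by (not reproducing)
# the AgroRadarEval framework described above. Dimensions and scores are
# invented for illustration.
import math
import matplotlib.pyplot as plt

# Hypothetical dimensions an organization might self-assess (scale 1-5)
dimensions = [
    "Circulation of results",
    "Stakeholder engagement",
    "Influence on planning",
    "Influence on prioritization",
    "Organizational learning",
]
scores = [4, 3, 2, 2, 3]  # illustrative self-assessment scores

# Evenly spaced angles around the circle, one per dimension;
# repeat the first point so the polygon closes.
angles = [2 * math.pi * i / len(dimensions) for i in range(len(dimensions))]
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions, fontsize=8)
ax.set_ylim(0, 5)
ax.set_title("Illustrative evaluation-use radar (hypothetical scores)")
plt.tight_layout()
plt.show()
```

A chart like this is only a conversation starter, of course; the value of a reflexive diagnostic lies in the discussion it prompts about why each dimension scores the way it does.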
What makes this approach useful is the shift in perspective it enables: from “Why are results not being used?” to “What organizational conditions shape use or non-use?”
If evaluation is meant to support better research and greater societal relevance, then producing evidence is not enough. Understanding how results are absorbed, debated, and acted upon is equally critical, especially in public research organizations and Global South contexts.
For evaluators, shifting attention from delivering results to enabling their use remains one of the most pressing – and under-addressed – challenges in evaluation practice today.
The American Evaluation Association is hosting International and Cross-Cultural (ICCE) TIG Week with our colleagues in the International and Cross-Cultural Topical Interest Group. The contributions all this week to AEA365 come from our ICCE TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.