Date: Sunday, March 15, 2026
Hello AEA365 readers. I’m Wheeler del Torro, Co-Founder and Chief Strategy Officer at Evaluation + Learning Consulting (ELC). We are a monitoring, evaluation, research, and learning firm that works with public agencies, nonprofits, and foundations to strengthen evaluation practice and build analytic capacity. Much of our work focuses on helping organizations make better use of the data they already collect. This week, I have the pleasure of serving as guest curator for a series of posts from members of our team.
If you spend time in evaluation spaces, you’ll often hear a familiar divide: qualitative evaluators on one side, quantitative analysts on the other.
For many evaluators, the quantitative side of the field can feel intimidating. Statistical software, coding languages, dashboards, automation tools, and AI-assisted analysis belong to a rapidly evolving technical landscape that grows more sophisticated each year.
But in practice, evaluators rarely need to become full-time data scientists. What they often need instead are practical ways to explore data, clean messy datasets, and present findings clearly enough to support decisions. In other words, the goal is not technical mastery. The goal is making sense of the information organizations already have.
The biggest barrier to using data in evaluation is rarely the lack of data. It is the challenge of working with it.
Organizations today collect enormous amounts of information through surveys, administrative systems, and program records. Yet much of that information remains underused. Data may be difficult to clean, difficult to interpret, or difficult to present in ways that resonate with stakeholders.
The posts this week highlight several practical ways evaluators navigate those challenges.
On Monday, Sarah will share how she uses spatial analysis to explore patterns that traditional tables can miss. Seeing data on a map often reveals relationships that are difficult to detect in spreadsheets.
On Tuesday, Liz will introduce a practical approach for simplifying repetitive data preparation using Power Query, a tool built into Excel that can automate many of the tedious steps involved in cleaning datasets (a brief sketch of what such a query can look like appears after this list).
On Wednesday, Christian will discuss how relevance can accelerate learning when teaching data skills, and how working with real-world data can make quantitative methods feel far more accessible.
On Thursday, Armand will explore design choices that make dashboards more useful for decision-makers, focusing on how visualizations can help people interpret data more quickly and accurately.
On Friday, Danni and Sabrina will reflect on evaluating youth programs that share a mission but require very different evaluation approaches, reminding us that context often matters more than method.
On Saturday, Kimberly will discuss why structure and documentation remain essential even when using AI-assisted tools, and why careful data practices still sit at the foundation of credible analysis.
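To give a flavor of the kind of automation Liz will discuss, here is a minimal sketch of a Power Query “M” script. The table and column names (RawSurveyData, ResponseDate, Score, Comments) are hypothetical placeholders, not from any real dataset; the point is that once steps like these are saved, Excel reapplies them automatically every time the source data is refreshed.

```
let
    // Load the raw survey export from the current workbook
    // ("RawSurveyData" is a hypothetical table name)
    Source = Excel.CurrentWorkbook(){[Name = "RawSurveyData"]}[Content],

    // Drop rows that are entirely blank, a common artifact of exports
    NonBlankRows = Table.SelectRows(
        Source,
        each not List.IsEmpty(
            List.RemoveMatchingItems(Record.FieldValues(_), {"", null})
        )
    ),

    // Set column types so dates and numeric scores sort and filter correctly
    TypedColumns = Table.TransformColumnTypes(
        NonBlankRows,
        {{"ResponseDate", type date}, {"Score", Int64.Type}}
    ),

    // Trim stray whitespace from an open-ended text column
    Cleaned = Table.TransformColumns(
        TypedColumns,
        {{"Comments", Text.Trim, type text}}
    )
in
    Cleaned
```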
Across these posts runs a common thread: you do not need to master every tool in the analytic toolbox to benefit from them. Sometimes a single new approach or technique can make data easier to work with and easier to explain.
I hope the reflections shared this week broaden how you think about quantitative analysis and offer practical ways it can support your evaluation work.
Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.