Date: Saturday, May 16, 2026
Hello, I’m Ruqayyah Abu-Obaid. I hold a PhD in interdisciplinary evaluation, and economics is the lens I deliberately bring to evaluation theory and practice. My research examines how decisions about evaluation are made under constraints, and how economic concepts like opportunity cost and marginal value can sharpen both analysis and reporting. In this post, I make one of those hidden trade-offs visible.
Most evaluation visualizations are descriptively accurate but decision-theoretically incomplete. They show where resources went and what outcomes were achieved, but rarely what was surrendered by not choosing differently. Opportunity cost — the value of the best foregone alternative — is the central piece of information a decision-maker needs to assess whether an allocation was justified. Standard evaluation charts omit it entirely.
Every allocation of funding, analytic attention, or evaluation effort reflects a choice among alternatives. Yet standard charts present allocations as if they occurred in isolation. Research in behavioral economics identifies two mechanisms by which invisible alternatives are systematically ignored: the availability heuristic holds that options not cognitively accessible are not weighed, and status quo bias predicts that default allocations persist because alternatives are not foregrounded. Visualization is therefore not neutral: it functions as choice architecture, shaping which decision paths appear possible.
There is also a neglected economic reality: evaluation resources have a production function. Different allocations generate different amounts of decision-relevant knowledge. A mature, well-funded program yields less new knowledge per additional dollar than an under-studied one where basic questions remain open. Most evaluation reporting does not represent this relationship.
Consider an agency allocating $10M across three programs (Table 1). Program A receives $6M but is mature, generating 2 units of knowledge per $1M. Program B receives $3M at 3 units per $1M. Program C receives $1M but is largely unstudied, generating 6 units per $1M, the highest marginal return. This reflects a well-established pattern: marginal learning is higher in data-sparse domains. Program C is not a better program; it is an under-evaluated one.
Table 1. Funds allocation across three programs

Program   Allocation   Marginal return (units/$1M)   Knowledge yield
A         $6M          2                             12
B         $3M          3                             9
C         $1M          6                             6
Total     $10M                                       27
The baseline produces 27 total units (12 + 9 + 6). Shifting just $1M from A to C produces 31 units (10 + 9 + 12) — 4 additional units of learning with no budget increase. The standard bar chart showing where $10M went, as in Figure 1, never surfaces this gap. The opportunity cost of the baseline allocation is 4 units of evaluation yield: knowledge that could have existed, but does not.
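The arithmetic behind this comparison can be checked in a few lines. This is an illustrative sketch using the hypothetical figures from Table 1, under the simplifying assumption that each program's marginal return (units of knowledge per $1M) is constant:

```python
# Assumed marginal knowledge yield per $1M, from the hypothetical Table 1.
RETURNS = {"A": 2, "B": 3, "C": 6}

def total_yield(allocation):
    """Total knowledge units produced by an allocation (in $M)."""
    return sum(RETURNS[p] * dollars for p, dollars in allocation.items())

baseline = {"A": 6, "B": 3, "C": 1}
shifted  = {"A": 5, "B": 3, "C": 2}  # move $1M from A to C

print(total_yield(baseline))                        # 27 units
print(total_yield(shifted))                         # 31 units
print(total_yield(shifted) - total_yield(baseline)) # 4 units foregone by the baseline
```

In a real analysis the constant-returns assumption would give way to a diminishing-returns curve, but the opportunity-cost logic is the same.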
Opportunity cost–informed visualization requires three elements absent from standard charts: a counterfactual panel displaying alternative allocations alongside the actual one; a marginal return indicator showing knowledge generated per dollar; and a foregone value annotation naming the opportunity cost explicitly. Health economics has produced equivalent visualizations (incremental cost-effectiveness planes) for decades. Evaluation practice has not. Emerging AI-assisted tools are beginning to lower the barrier: counterfactual scenarios can be generated from evaluation investment histories, and marginal return estimates can be approximated from administrative data, without custom modeling infrastructure.
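The data behind a counterfactual panel can be generated mechanically. As a minimal sketch, again assuming the hypothetical constant returns from Table 1, the following enumerates every single $1M reallocation and its gain or loss relative to the actual allocation; the top row is the foregone value a chart would annotate:

```python
from itertools import permutations

RETURNS = {"A": 2, "B": 3, "C": 6}   # assumed units of knowledge per $1M
baseline = {"A": 6, "B": 3, "C": 1}  # actual allocation, in $M

def total_yield(alloc):
    return sum(RETURNS[p] * d for p, d in alloc.items())

def counterfactuals(alloc, step=1):
    """Every single-step reallocation and its yield change vs. the actual one."""
    base = total_yield(alloc)
    rows = []
    for src, dst in permutations(alloc, 2):
        if alloc[src] >= step:  # cannot move more than the program holds
            alt = dict(alloc)
            alt[src] -= step
            alt[dst] += step
            rows.append((f"{src}->{dst}", total_yield(alt) - base))
    return sorted(rows, key=lambda r: -r[1])  # best alternative first

for move, gain in counterfactuals(baseline):
    print(move, gain)  # A->C comes first, with a gain of 4 units
```

Feeding these rows to any charting library yields the counterfactual panel; the best positive gain is the opportunity cost of the status quo.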
If evaluation is about informing constrained choices, visualizations that suppress the cost of those choices work against the enterprise. This post sketches the argument; a full treatment, including a formal evaluation production function and design framework, is in development. Reaction and pushback welcome.
For evaluation-informed decisions to improve allocation, the foregone alternative must be visible. Making it visible is not an enhancement. It is foundational: without it, evaluation visualizations inform decisions without revealing their consequences.
The American Evaluation Association is hosting Data Visualization and Reporting (DVR) Week with our colleagues in the DVR Topical Interest Group. The contributions all this week to AEA365 come from DVR TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.