Date: Thursday, May 14, 2026
Hi Everyone, my name is Najat Elgeberi, Ph.D., and I work as an evaluation specialist and assistant professor of program evaluation at University of Nevada, Reno Extension.
When evaluators create graphs, we often focus on the “big” decisions first: whether to use a bar chart, line chart, scatterplot, or dashboard. Yet one of the most powerful design decisions is often an afterthought: color. Color can either clarify or distort a visual message. Thoughtful color choices help readers identify patterns, separate groups, and understand magnitude. Poor color choices, by contrast, can make a graph harder to read, direct attention to the wrong places, or exclude readers with color-vision deficiency.
Color should always have a job. If every bar, line, or point is assigned a different hue simply because the software made that easy, the graph may become more decorative than informative. Claus Wilke argues that color should make a figure easier to read, not create a visual puzzle. This is especially important when we display categorical groups. Once a graph uses too many colors, readers must constantly move back and forth between the data and the legend, trying to decode which color belongs to which category. That effort increases cognitive load and reduces comprehension. In evaluation reporting, that can mean stakeholders spend more time decoding the display than understanding the finding.
Color also shapes how people interpret order and magnitude. This matters most in heat maps, choropleth maps, and other displays that encode numeric values with a gradient. Not all gradients are equally interpretable. Research on color integrity in visualization emphasizes the importance of perceptually uniform palettes, in which neighboring colors change at a visually even rate. When color gradients are uneven, viewers may perceive abrupt differences where none exist or miss meaningful variation that should stand out. A related problem appears in the familiar rainbow palette. Because its lightness changes non-monotonically, the rainbow scale can highlight arbitrary parts of the data and obscure the true ordering of values. In other words, the palette can introduce a story the data did not intend to tell.
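The rainbow problem can be made concrete with a quick check of how lightness behaves along a palette. The sketch below uses the WCAG relative-luminance formula as a simple stand-in for perceived lightness; the sampled RGB values are rough approximations of a viridis-like uniform palette and a jet-like rainbow, chosen for illustration rather than taken from any specific software.

```python
# Rough sketch: does lightness change in one direction along a gradient?
# WCAG relative luminance stands in for perceived lightness here, and the
# RGB samples below are approximations for illustration.

def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color (channels in 0-1)."""
    def linearize(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def lightness_is_monotonic(palette):
    """True if luminance strictly increases from first color to last."""
    lum = [relative_luminance(c) for c in palette]
    return all(a < b for a, b in zip(lum, lum[1:]))

# Approximate samples from a perceptually uniform, viridis-like palette:
# dark purple -> blue -> teal -> green -> yellow
UNIFORM = [(0.27, 0.00, 0.33), (0.23, 0.32, 0.55),
           (0.13, 0.57, 0.55), (0.37, 0.79, 0.38), (0.99, 0.91, 0.14)]

# Approximate samples from a rainbow, jet-like palette:
# dark blue -> blue -> cyan -> yellow -> red -> dark red
RAINBOW = [(0.0, 0.0, 0.5), (0.0, 0.0, 1.0), (0.0, 1.0, 1.0),
           (1.0, 1.0, 0.0), (1.0, 0.0, 0.0), (0.5, 0.0, 0.0)]

print(lightness_is_monotonic(UNIFORM))   # True: steady dark-to-light ramp
print(lightness_is_monotonic(RAINBOW))   # False: bright middle, dark ends
```

The rainbow samples rise to very bright cyan and yellow in the middle and fall back to dark red, which is exactly the non-monotonic lightness that makes arbitrary mid-range values pop out of a heat map.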
The issue is not only accuracy, but also accessibility. A graph is not successful if only some readers can interpret it correctly. A substantial share of the population has some form of color-vision deficiency, and red-green distinctions are a common challenge. If a graph relies only on red versus green to separate categories or show good versus bad performance, some readers may not be able to distinguish those signals reliably. Crameri and colleagues recommend using color-blind friendly palettes and checking figures in grayscale to ensure that categories or values still differ in relative lightness. If the graph becomes unreadable in grayscale, it is a warning sign that the design is doing too much work through hue alone. That is a practice evaluators can adopt immediately.
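The grayscale check itself can be approximated numerically. A minimal sketch, again using WCAG relative luminance as a proxy for how light each color appears once hue is stripped away; the 3:1 contrast threshold and the example colors are illustrative assumptions, not part of the cited recommendation.

```python
# Minimal grayscale "stress test" for a two-color encoding.
# Relative luminance approximates how light each color looks in grayscale;
# the 3:1 contrast threshold and the example colors are illustrative choices.

def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color (channels in 0-1)."""
    def linearize(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(c1, c2):
    """WCAG contrast ratio between two colors, from 1.0 to 21.0."""
    hi, lo = sorted((relative_luminance(c1), relative_luminance(c2)),
                    reverse=True)
    return (hi + 0.05) / (lo + 0.05)

def survives_grayscale(c1, c2, threshold=3.0):
    """True if the pair stays distinguishable by lightness alone."""
    return contrast_ratio(c1, c2) >= threshold

red, green = (0.8, 0.2, 0.2), (0.2, 0.6, 0.2)        # similar lightness
dark_blue, light_orange = (0.1, 0.1, 0.45), (1.0, 0.7, 0.2)

print(survives_grayscale(red, green))                # False: hue-only contrast
print(survives_grayscale(dark_blue, light_orange))   # True: lightness differs too
```

The red-green pair here differs almost entirely in hue, so it collapses in grayscale, while the dark-blue and light-orange pair keeps a large lightness gap and remains readable.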
For evaluation, the takeaway is straightforward: color should support meaning, not compete with it. Use a small number of colors for categorical comparisons. Reserve bright, saturated colors for deliberate emphasis rather than for every element. Choose ordered palettes with steady lightness changes when representing magnitude. When possible, reinforce color with direct labels, patterns, or position so that the graph remains interpretable even without perfect color discrimination. These choices are not merely aesthetic. They shape who can understand our findings, how quickly they can interpret them, and whether the message they take away is the one the data actually support.
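Those guidelines can also be turned into a quick self-check before a chart goes out. The sketch below is a hypothetical pre-publication audit: the six-color cap and the 0.1 lightness-gap rule are illustrative thresholds rather than published standards, and HLS lightness from the standard library serves as only a rough proxy for how colors would separate in grayscale.

```python
import colorsys

# Hypothetical pre-publication audit for a categorical palette.
# The six-color cap and the 0.1 lightness-gap rule are illustrative
# thresholds, not published standards.
MAX_CATEGORIES = 6
MIN_LIGHTNESS_GAP = 0.1

def palette_warnings(palette):
    """Return warnings for a {category: (r, g, b)} palette, channels in 0-1."""
    warnings = []
    if len(palette) > MAX_CATEGORIES:
        warnings.append("too many colors: group categories or label directly")
    # HLS lightness (stdlib colorsys) as a rough grayscale proxy
    lightness = sorted(colorsys.rgb_to_hls(*rgb)[1] for rgb in palette.values())
    if any(b - a < MIN_LIGHTNESS_GAP for a, b in zip(lightness, lightness[1:])):
        warnings.append("some colors differ mainly in hue: add labels or patterns")
    return warnings

hue_only = {"treatment": (1.0, 0.0, 0.0), "control": (0.0, 0.9, 0.0)}
print(palette_warnings(hue_only))   # flags the hue-only distinction

safer = {"treatment": (0.1, 0.1, 0.45), "control": (1.0, 0.7, 0.2)}
print(palette_warnings(safer))      # []
```

A check like this cannot replace looking at the chart, but it catches the two most common slips named above: too many categorical hues, and pairs of colors that hue alone must carry.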
The next time you revise a chart, ask one simple question: What is this color helping my reader see? If the answer is not clear, the color choice probably needs another look.
The American Evaluation Association is hosting Data Visualization and Reporting (DVR) Week with our colleagues in the DVR Topical Interest Group. The contributions all this week to AEA365 come from DVR TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.