Date: Friday, May 15, 2026
Hi everyone, my name is Najat Elgeberi, and I work as an evaluation specialist at the University of Nevada, Reno Extension. In this post, I highlight several mistakes evaluators should avoid when using graphs. When I first began analyzing data and turning findings into visuals, I assumed my job was to make charts look impressive. I used bright colors, crowded legends, bold borders, and whatever chart style the software offered by default. At the time, I thought that if a graph looked polished, it would also look professional. Over time, I learned almost the opposite: the more decoration I added, the harder the graph became to read. A good graph is not the one with the most design features. It is the one that helps readers understand data quickly, accurately, and confidently.
One of my earliest mistakes was using too many colors. I assigned a different color to every bar or category because it seemed like a way to add clarity. In reality, I was adding work for the audience. Readers had to keep moving between the graph and the legend, trying to remember what each color meant. Eventually, I realized that color should be used carefully and with purpose. Now I try to use a restrained palette, keep most elements quiet, and highlight only the one or two points that deserve attention. That small change made my graphs feel calmer, but more importantly, it made the message easier to find.
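The "quiet palette, one highlight" idea can be sketched in a few lines of Python with matplotlib. The program names and scores below are invented purely for illustration: every bar stays a muted gray except the one point the reader should notice.

```python
# A minimal sketch of highlighting a single category; the workshop
# names and satisfaction scores are hypothetical example data.
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

programs = ["Workshop A", "Workshop B", "Workshop C", "Workshop D"]
scores = [62, 71, 88, 58]
highlight = "Workshop C"  # the one point that deserves attention

# Keep most elements quiet: gray for everything, color for one bar.
colors = ["#1f77b4" if p == highlight else "#c8c8c8" for p in programs]

fig, ax = plt.subplots()
ax.bar(programs, scores, color=colors)
ax.set_ylabel("Satisfaction score")
ax.set_title("Workshop C stood out this year")
fig.savefig("highlight_bar.png")
```

Because only one bar carries color, no legend is needed; the title and the highlight do the explaining.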
Another mistake was confusing visual excitement with effective communication. I sometimes used decorative charts, including 3D bars, heavy outlines, dark backgrounds, and other embellishments that made the display feel dramatic. The problem was that these choices competed with the data. They made it harder to compare values and easier to become distracted by the design. Learning the basic principles of data visualization helped me understand that clean 2D graphs, light gridlines, and clear labels are not boring. They respect the reader’s time. Once I stopped trying to make charts look flashy, I was finally able to make them useful.
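The decluttering choices named above, plain 2D bars, light gridlines, and no heavy borders, can also be shown in a short sketch. The enrollment figures here are made up for the example.

```python
# A sketch of basic decluttering: 2D bars, light gridlines behind the
# data, and unneeded chart borders removed. Values are hypothetical.
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

years = ["2022", "2023", "2024", "2025"]
enrollment = [120, 135, 150, 170]

fig, ax = plt.subplots()
ax.bar(years, enrollment, color="#4d4d4d")
ax.set_axisbelow(True)  # draw gridlines behind the bars
ax.yaxis.grid(True, color="#e0e0e0", linewidth=0.8)  # light, quiet gridlines
for side in ("top", "right"):
    ax.spines[side].set_visible(False)  # drop borders that add no meaning
ax.set_ylabel("Participants")
ax.set_title("Enrollment, 2022-2025")
fig.savefig("clean_bars.png")
```

Nothing here is flashy, and that is the point: every remaining mark supports a comparison the reader needs to make.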
A third lesson was realizing that a graph is not successful if only some people can read it well. Early in my work, I relied too much on red and green or on color alone to distinguish categories. Later, I learned how difficult that can be for readers with color-vision deficiency. I also learned that a graph should still make sense when colors are muted, printed in grayscale, or viewed in less-than-ideal conditions. Now I use color-blind-safe palettes, direct labels, line styles, and other cues that do not depend on hue alone. This improved both accessibility and clarity.
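One way to make a chart work without depending on hue is to pair distinct line styles with direct labels at the line ends, so the graph still reads correctly in grayscale. This sketch uses hypothetical site groups and quarterly counts:

```python
# A sketch of hue-independent cues: distinct line styles plus direct
# labels instead of a color-coded legend. Data are hypothetical.
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

quarters = [1, 2, 3, 4]
series = {
    "Urban sites": ([40, 48, 55, 60], "solid"),
    "Rural sites": ([35, 37, 42, 50], "dashed"),
}

fig, ax = plt.subplots()
for name, (values, style) in series.items():
    ax.plot(quarters, values, linestyle=style, color="black")
    # A direct label at the end of each line replaces the legend.
    ax.text(quarters[-1] + 0.05, values[-1], name, va="center")

ax.set_xlabel("Quarter")
ax.set_ylabel("Participants served")
ax.set_xlim(1, 5)  # leave room on the right for the labels
fig.savefig("lines_no_legend.png")
```

Because the lines differ in style, not just color, and each is labeled where it ends, the chart survives grayscale printing and works for readers with color-vision deficiency.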
Looking back, I do not think these mistakes happened because I did not care about quality. I think they happened because many of us learn software before we learn design principles. We discover how to create charts long before we understand how people actually read them. My work improved when I started asking different questions: What do I want the audience to notice first? What comparison matters most? Would this still work for someone reading quickly, printing in black and white, or struggling to distinguish color?
My biggest lesson is that data visualization is not about decoration. It is about judgment. It is about choosing forms, colors, labels, and emphasis in ways that support understanding. I still revise graphs often, but now revision feels purposeful. I am not adding more. I am removing what gets in the way. In my experience, that is when a graph begins to do what it is supposed to do.
The American Evaluation Association is hosting Data Visualization and Reporting (DVR) Week with our colleagues in the DVR Topical Interest Group. The contributions all this week to AEA365 come from DVR TIG members. Do you have questions, concerns, kudos, or content to extend this AEA365 contribution? Please add them in the comments section for this post on the AEA365 webpage so that we may enrich our community of practice. Would you like to submit an AEA365 Tip? Please send a note of interest to AEA365@eval.org. AEA365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators. The views and opinions expressed on the AEA365 blog are solely those of the original authors and other contributors. These views and opinions do not necessarily represent those of the American Evaluation Association, and/or any/all contributors to this site.