Wednesday, October 1, 2025
Dr. Karen T. Jackson, President, American Evaluation Association
“The tsunami is quickly destroying what we used to stand on.”
These words, scribbled in a private moment of reflection, are not hyperbole. They are a truth many of us in the evaluation field have been reluctant to name: the foundation beneath our profession—evidence-based practice, independence, ethics, and public trust—is eroding. And now, we must face the storm head-on.
On August 1, 2025, the U.S. administration abruptly terminated the Commissioner of the Bureau of Labor Statistics. The charge? Releasing “rigged” data that was politically inconvenient. The act was more than a personnel change—it was a warning. It signaled a growing disdain for independent evidence, a trend that is now codified in a series of Executive Orders that have fundamentally altered the federal ecosystem for evaluation.
At the American Evaluation Association (AEA), we recognize this for what it is: a watershed moment—not just for evaluation, but for democracy itself. Our expectations as a field have been upended. Our values—transparency, inclusion, ethical stewardship—are increasingly at odds with the prevailing winds of policymaking. What once motivated us to evaluate with courage and care now feels inadequate in the face of weaponized misinformation and policy volatility.
This year alone, federal actions have created unprecedented obstacles for evaluators:
Earlier this year, AEA conducted its Phase I Impact Survey. The findings were as devastating as they were predictable:
And yet, amid this chaos, many also spoke of resilience:
“I’ve had to pivot quickly. It’s been hard but rewarding to find creative ways to stay engaged.”
“I still believe we’re better together.”
These are not just sentiments—they are signals of hope and direction.
Let’s be clear: this is not just an evaluation problem. It is a democracy problem. When truth becomes partisan and expertise is dismissed as political, we all suffer—students, workers, veterans, children, families.
Evaluation is more than metrics. It is meaning. It tells us whether a policy or program is ethical, effective, and equitable. When evaluation is silenced, the consequences are not just academic. They are human.
Some organizations have chosen to adapt quietly, focusing on opportunities in AI and big data. That is their path. But at AEA, we choose to name the difference between data policy and evaluation policy. Evaluation is not just about collection. It is about interpretation, judgment, and accountability.
This is why we are issuing a call to the higher education and learned society communities, to philanthropy, to public administrators, and to our sister associations: join us in preparing for what comes next—what we call the Morning After Coalition.
The Evidence Act of 2018 remains on the books. But its scaffolding is being dismantled. If we want to restore its promise, we need to start now.
We cannot do this alone. If you are part of a university program, a philanthropic foundation, a federal agency, a civic nonprofit—please reach out. We need your agendas, your ideas, your collaboration.
Let us begin, not by rebuilding what once was, but by imagining what can be.
Let’s reestablish evaluation as an essential good—one that strengthens democracy, serves the public interest, and protects truth.
We recognize that not all in our field agree. Some argue that as a nonpartisan professional association, AEA should remain neutral, adapt to the administration’s agenda, and avoid the public fray. Others ask, “Where is the battle plan?”
We believe both perspectives deserve space. But neutrality is not silence. And silence is not safety. We choose to act.
This is not the end of evaluation—but it may be the end of how we used to practice it. We are not going back. We are going forward—together.
“The future is ours to fight for and win.”
And we invite you to join us.
By Anisha Lewis, Executive Director, AEA
The American Evaluation Association (AEA) is excited to welcome you to the 2025 Annual Conference in Kansas City, Missouri, November 10-14, 2025. Guided by member feedback and our commitment to innovation, we have made meaningful changes to enrich your conference experience.
You can look forward to a Kansas City welcome, where our host city will be featured throughout the week. From local attractions to community engagement opportunities, the conference will reflect the spirit and culture of Kansas City.
We recognize that these are challenging times for so many, so if you are not able to join us in person, we hope that you can participate virtually to access the plenaries and Presidential Strand sessions. You can also look forward to conference-level sessions presented virtually over the next several months (details forthcoming).
New Categorization of Sessions
Breakout sessions will be categorized by the Evaluator Competencies, in addition to Topical Interest Groups.
Advocacy Training
Don’t miss our workshop to support member training in public engagement to advocate for evaluation. (Training will also be available virtually at a later date, TBA.)
Searching for a Job?
Visit the AEA Hub to sharpen your skills (resume reviews, networking support), learn about our online Career Center, and view job postings.
Onsite Feedback
More Interactive Learning
In addition to traditional panels and papers, the program now includes hands-on labs, design sprints, and collaborative workshops. These formats are designed to deepen learning and spark creative solutions to the challenges faced by evaluators. Activities include:
Expanded Virtual Access
We are making the conference more accessible to members near and far. The Plenaries and Presidential Strand Sessions will be available virtually (livestream) so you can learn and engage wherever you are. Register at https://www.evaluationconference.org/Registration/Registration-2025
Reimagined Networking
Whether you are new to the field or a longtime member, you will find new spaces to connect with peers, such as themed meetups and morning coffee conversations, designed to help you forge lasting professional connections.
Lunch in the Exhibit Hall
Boxed lunches of sandwiches and salads will be for sale (pre-sale only via online registration) on Thursday and Friday.
We look forward to your participation in our Evaluation 2025 conference, either in person or virtually!
Sincerely,
Anisha Lewis
Executive Director
On Tuesday, September 16, AEA, in collaboration with Washington Evaluators, hosted The Evolution of Evaluation: Meeting the Moment. The panel featured Katherine Dawes, Rodney Hopson, Mark Schneider, and Robert Shea—experienced voices from across sectors—who explored how evaluators can navigate today’s period of significant transition with resilience, creativity, and renewed purpose. Panelists highlighted innovative approaches to practice, the evolving role of evaluators within systems and communities, and new ways to demonstrate impact in complex environments.
To capture the conversation for colleagues unable to attend, AEA invited two emerging leaders to share their reflections. Their recap offers a fresh perspective on the dialogue, viewed through their diverse and unique lenses. The authors’ perspectives are their own and do not represent any external entities or organizations.
Cecilia Vaughn-Guy is a doctoral candidate in Education Policy and Organizational Leadership at the University of Illinois Urbana-Champaign with dual concentrations in Human Resource Development and Diversity and Equity in Education. She completed a graduate certificate in evaluation through the Evaluation Program within the Quantitative and Qualitative Methodology, Measurement, and Evaluation (QUERIES) division of Educational Psychology, and is a graduate of the American Evaluation Association’s Graduate Education Diversity Internship (GEDI) Program. Her research interests center on amplifying the voices of Black women who are frontline healthcare workers using an intersectional lens, reimagining equitable and sustainable organizational hierarchies in healthcare organizations, and creating and sustaining cultural change through participatory evaluation.
Nate Varnell is a Policy and Research Analyst at the Data Foundation’s Center for Evidence Capacity. He graduated from The George Washington University’s Trachtenberg School of Public Policy & Public Administration with a Master of Public Administration, concentrating in Program Evaluation & Policy Analysis. In his work, he is dedicated to promoting the role of evaluation and evidence-informed policymaking in government, with an emphasis on federal science and public health programs.
Below, responses from Cecilia Vaughn-Guy, M.S. OT, and Nathan Varnell appear in turn.
This piece is intended to be conversational, so I will start it conversationally. The format of this piece was chosen to ask readers to hold a mirror to their ideas of how scholars and scholarship show up. I’m a Midwesterner, and I believe in greetings and introductions. My name is Cecilia Vaughn-Guy. I’m a Black woman, sister, daughter, friend, researcher, evaluator, and doctoral candidate from Cleveland, Ohio. I’ve lived all over the country, and folk who know me might read this in my accent. Those who don’t should know I have a drawl and have been confused for a Southerner. I have been adopted by people I love from Decatur, GA (nothing greater), and I consider Atlanta and some other areas there home. I have a wanderlust spirit, so I’m also good for an impromptu road trip that keeps me away from home for months at a time, as my roommate would attest. My personality is taller than my 5’4” height (and the heels I prefer to wear help me get there, too).
The event that inspired this piece was planned with dialogue in mind, both between panelists and among participants, and I’ve seen plenty of that dialogue carry on in digital spaces since the event. I believe it’s appropriate to carry on the conversation in a format like this — one that won’t be lost to the frenetic algorithm moving on. I’m a Southerner who always had the importance of manners drilled into me, so I’ll follow Cecilia’s lead. My name is Nate Varnell. Born in Atlanta but a Texas Aggie by fortune, I’m a white man, son, brother, lifelong learner, researcher, and evaluator living in Washington, D.C. I often feel I’m always moving and rarely say no to something new, so I’m excited to be exploring how our community can respond to this moment.
Even as I’m savoring the last lap of my Ph.D. journey, I am thinking about what the next options are. What I thought was next, which was entering the federal evaluation or policy space, is a dream that died when the president took office. I am a critical scholar who studies Black women working in healthcare through a Black feminist lens. Once the National Science Foundation sent out the list of words it would use to scour grant applications, I knew that route would likely not be my best path forward.
I’ve had the privilege of being welcomed into this field by veteran scholars and practitioners, who drew me in while I was completing my master’s degree. Now, after only a couple of years working in federal evaluation and evidence-informed policymaking, the expectations I had are also totally up in smoke. I did not expect that my brief tenure at the Government Accountability Office would likely be my only opportunity to practice evaluation in the federal government for the coming years. My first years as an early-career researcher and evaluator have been a cycle of checking my expectations for what the future holds and course-correcting my goals as a working professional. Sometimes my idea of the future includes a clear career path that I’d love to chase. Sometimes it’s just looking for how to keep my head above water.
I came to the panel because this conversation about the next steps forward has been happening in various spaces and in various ways since November. As an evaluator, I wanted to see how the conversation was continuing to evolve. I wanted answers to the questions I couldn’t answer myself. I knew that at least one of the panel members had a clear idea of what next steps could look like. I also wanted a chance to connect with people I met through the Graduate Education Diversity Internship (GEDI) program, like Zachary Greys, Kutia Swinney, Anisha Lewis, and Esther Nolton. I also recognized the opportunity to network.
I came to the panel knowing that many familiar faces in federal evaluation would be in attendance or on the panel, whom I wanted to support and hear from. Since November, I’ve been a part of numerous meetings about organizational, community, and individual responses to the blitz of administrative actions and policies. The uncertainties of those conversations — how we navigate both standing for our values and remaining relevant to policymaking — have led me to be careful with my own words, and this panel presented an opportunity for an honest dialogue.
I admittedly wrote a fairly neutral piece. I had some strong opinions about the panel, and as a student of the Center for Culturally Responsive Evaluation and Assessment (CREA) at the University of Illinois, I also knew that the assignment I was given (which was to give an overview) was not the space to air them. I looked at the notes I took and tried to pull out the most salient take-home points so I could highlight them. I fully recognize that the things that were salient to me were salient partly because of the worldview I have cultivated based on the identities I hold, so I could not keep myself out of my writing, but I tried to be intentional about not presenting my opinion as fact.
I also wrote with those who couldn’t join us in mind, taking notes on pretty much everything to divine what I took away from it later. When I attend events like these, especially now, I am looking for springboards that can carry me into my own further reading, research, and exploration. Rodney’s reference to Eleanor Chelimsky's 1995 address and the entire panel’s callbacks to prior administrations were those hooks for learning, which I hoped to share with others. Regardless of my personal opinions on the panel’s content, I carry with me the sense that we need to be forward-looking. Ruminations on the challenges of the moment can be emotionally cathartic, but in my own experience haven’t led me anywhere useful. Often, rumination has taken me farther from any solutions than where I started.
I loved that the panel had different perspectives. I was actually pleasantly surprised to learn we weren’t having an echo-chamber conversation and that there were other people in the room who also had strong feelings. The panel created a brave space for the speakers and the attendees. I love that we got clear calls to action but that I also left with my own questions about how to be effective when someone has a point of view that challenges you. Robert Shea asked thought-provoking questions. Mark Schneider acknowledged his Republican sympathies, noting that he thought DOGE did a good job of cleaning house in the federal government but could have done better at sharing the plan of why. I don’t think there is a plan. Katherine Dawes and Rodney Hopson gave me a masterclass in how to respond to a point of view you don’t agree with. People who know me and have worked with me in professional capacities would tell you that I am nothing if not passionate, and sometimes my Cleveland comes out unbidden… if you know, you know…
I had to sit with myself during the panel; that is to say, I was very introspective. Could I be open to another point of view? Could I take the time to hear what they said? How much understanding would mean I was ceding the power I have as a Black woman? How could I use an interest convergence perspective to get what I want? I got things I didn’t even know I needed from being present in the room and listening to the questions people asked.
Oh yeah, you could’ve cut the tension with a knife many times during the panel. It’s one thing for the current moment to be challenging us; it’s another experience for the community’s commonly held narratives about this moment (and more) to be challenged in person. To that end, Katherine and Rodney excelled at pushing back with their own arguments while allowing room for Mark’s position to be heard and taken seriously, not dismissed out of hand. I could’ve written another whole piece just on those conversational dynamics.
Robert asked what I believe to be one of the most important questions of the evening, which I’ll paraphrase. The “evidence movement” in government has worked hard to build scaffolding in law and policies to ensure evaluation is an essential function of government. At a time of reduced federal capacity and a new political lens for what constitutes “evidence,” how do we sustain demand for evaluation and evidence use, keeping it from becoming a compliance exercise? As valuable as the panelists’ responses were, I don’t think enough time was given to this question, and it’s one we will need to keep grappling with in many events and dialogues to come. When I circulated among the audience after the panel, it came up repeatedly in different ways: what do we do with an administration that is critical of scientific consensus, and in an era when political victories often rely on popular perceptions rather than real policy outcomes based on evidence? Can we engage in conversation about it, and if so, how? How do we reach individuals who share Mark’s view that DOGE’s goals to shake up the federal bureaucracy were good, albeit poorly executed?
I will say that I wanted to see some more explicit discussion of positionality by the panel members. I felt that Mark Schneider opened the door to it. Still, I understand it must have taken incredible restraint to keep the conversation civil and cute, or I should say, I know it would have for me.
I wish that panel could have been broadcast and recorded. I know there were some technical difficulties, but I would love to be able to run it back. I also want to see more student and new-evaluator perspectives on panels. There is something to be said about a dialogue between the wisdom and knowledge of a more senior scholar and a newbie, word to the James Baldwin and Nikki Giovanni conversation.
I agree with Cecilia: there were certainly two distinct conversations competing within the one panel. On one hand, a discussion of what “evidence” means and how to communicate its value in policymaking now; on the other, a discussion of how identities and political positions are defining how we engage with evaluation. Something that stood out to me about the event is that most individuals in the room seemed to hold being an “evaluator” as part of their identity. That identity certainly carries a strong belief in the value of evaluation, which I do share. Yet there are plenty of professionals who work in and around evaluation functions in organizations of all kinds who do not hold this identity, and who may hold no strong beliefs on the value of evaluation either way.
Mark, in my opinion, did not appear to view himself as an evaluator. He emphasized how politicized the evaluations he saw during his tenure at the Institute of Education Sciences had been, and brought a much more critical lens to evaluation practices. Whether evaluators agree with these criticisms or not, there is great value in being challenged — as I said in my recap piece. I would love to see more bridges built with policy practitioners and public officials who don’t identify as evaluators, to see what we can learn from them and expand this profession’s tent.
Also, as I’m trying to navigate the future, especially a potential career in evaluation during a very difficult job market, I’m looking for the perspectives of fellow early career evaluators on how they’re weathering the moment. To contrast and converse on our experiences with more experienced scholars would be a great opportunity for community mentorship.
A warm welcome to our newest AEA members. We value your membership and look forward to providing you with the tools and resources that help ensure your professional growth and success year after year. We'd also like to recognize the members who have reached significant milestones, having been part of the AEA community for 5+, 10+, 20+, and 25+ years as of the past two months.
Congratulations to the 2025 AEA Awards recipients for their achievements, dedication, and contributions to the field of evaluation!
AEA Marcia Guttentag Promising New Evaluator Award
Dr. Cherie Avent
AEA Alva and Gunnar Myrdal Evaluation Practice Award
Leslie Goodyear
AEA Alva and Gunnar Myrdal Government Award
Dr. Toni Watt
AEA Robert Ingle Service Award
Sharon Rallis
AEA Research on Evaluation Award
Tarek Azzam
AEA Paul F. Lazarsfeld Evaluation Theory Award
Dr. Apollo M. Nkwake
View the Announcement
AJE Excellence in Reviewing Award
Kyle Cox, University of North Carolina at Charlotte
Tatiana Bustos, RTI International
Rana Gautum, University of North Georgia
The Best of Volume 45 Award
Navigating the Field While Black: A Critical Race Analysis of Peer and Elder Advice to and From Black Evaluators
By Cherie M. Avent, J.R. Moller, Adeyemo Adetogun, Brianna Hooks Singletary, and Ayesha S. Boyce
By Nathan Varnell, Consultant for the Evaluation Policy Task Force
As a hot summer comes to a close and with fall right around the corner, this month’s Policy Watch will be looking at the hottest topics in evaluation policy from August and September.
The federal government is facing a likely shutdown as October 1 approaches, with Congress in recess this week amid an ongoing appropriations impasse. The situation continues to develop rapidly, with implications for federal evaluation capacity and the broader community.
The House passed a "clean" continuing resolution that would fund the government through November 21; however, the Senate has rejected both that measure and a Democratic counterproposal that offered short-term funding in exchange for limits on presidential authority to withhold money for programs approved by Congress. Senate Majority Leader Thune sent senators home with plans to return September 29 to vote again on the House bill favored by Republicans.
A potential shutdown would have immediate impacts on federal evaluation offices, data collection activities, and research programs across agencies. Historically, shutdowns have resulted in suspended contracts, delayed evaluation activities, and reduced access to federal data resources — all of which directly affect the evaluation community's capacity to conduct ongoing work.
The House Appropriations Committee has still been actively advancing legislation in response to the White House’s FY 2026 budget proposal despite the looming shutdown, including the recent Labor, Health and Human Services, Education, and Related Agencies (LHHS) Appropriations bill. Read COSSA’s analysis of the bill’s impacts for federal research and science agencies here.
President Trump fired Bureau of Labor Statistics (BLS) Commissioner Erika McEntarfer on August 1 after the release of a weaker-than-expected jobs report that included large downward revisions to previous months' estimates. The president called the validity of the report’s figures into question without providing evidence. Trump has nominated E.J. Antoni to replace McEntarfer as BLS Commissioner. The firing has drawn bipartisan criticism from statisticians, economists, and lawmakers, and raised concerns over statistical independence in the federal government.
Executive Order 14332, "Improving Oversight of Federal Grantmaking," issued August 7, restructured how federal grants are to be administered and monitored. The order mandates that all discretionary grants undergo review by politically accountable senior appointees before approval, introducing an increased level of political control over funding decisions, including federally funded evaluations. The order explicitly prohibits federal funding for programs involving "racial preferences or other forms of racial discrimination," or programs that "deny the sex binary in humans," among other requirements.
The order also requires compliance in applications for grant awards “with administration policies, procedures, and guidance respecting Gold Standard Science,” referring to Executive Order 14303, previously reported on in the July Policy Watch. Federal science agencies have publicly posted their implementation plans in response to the Gold Standard Science (GSS) order. While each agency addressed the executive order differently, all plans include information on how agencies are currently complying with or plan to address the requirements outlined in guidance from the White House Office of Science and Technology Policy (OSTP) issued in June. You can find examples of the implementation plans released related to the federal science and research agencies at the links below:
AEA members have been active and engaged across a variety of timely events on evaluation policy and government.
AEA co-sponsored an event with the Association for Public Policy Analysis & Management (APPAM) as part of the Policy on the Rocks series at the Texas A&M Bush School in D.C. on September 9. The event, “AI in Public Policy,” explored how artificial intelligence is transforming the way research and evaluations are conducted and how public policy decisions are made.
The AEA hosted a lively panel discussion on the state of the field, “Evolution of Evaluation: Meeting the Moment,” on Tuesday, September 16. The panel, held in downtown D.C. in collaboration with Washington Evaluators, drew a packed house (despite pouring rain) and made for engaging discussion about how evaluators are navigating the moment and considering where to go from here. A detailed event recap is forthcoming.
The AEA’s Evaluation Policy Task Force (EPTF) also held its second virtual Town Hall on updates to AEA’s Roadmap for a More Effective Government, drawing on member feedback from the first Town Hall held in July. The EPTF engaged members in dialogue about key areas of the update, including AI in evaluation, data privacy, and open science.
The upcoming months will surely be busy with more news and events as the government navigates the shutdown negotiations. In the meantime, stay informed about changes to America’s data and evidence infrastructure with updates from the American Statistical Association, COSSA’s Washington Update, and the Data Foundation’s Evidence Capacity Pulse Report series.
The Evaluation Policy Task Force continues to monitor the impacts of federal policy decisions for the evaluation community. If you are aware of changes in government and the evaluation community that are impacting your work or the work of other evaluators, consider providing information to the Evaluation Policy Task Force via evaluationpolicy@eval.org.
It's official: more than 1,000 evaluators have already registered to join us in Kansas City. With a few weeks left to register, you're invited to join us in exploring emerging trends in evaluation and advancing your skills alongside your professional peers through more than 300 sessions.
Register now and get ready for Evaluation 2025, happening November 10–14.
Register Now
Can't join us in person? Consider the virtual conference option, allowing you to livestream the plenary and presidential strand sessions.
From Equity to AI: Dive Into the Issues Shaping Evaluation Now
Join one of 15 workshops tackling today’s key evaluation themes—from equity and community engagement to data, AI, and trauma-informed practice. Add workshops to your main conference registration to further enrich your Evaluation 2025 experience.
Explore the Latest in Evaluation Literature and Research
Connect with leading voices in evaluation at this special reception—browse new publications, meet the authors behind them, and spark ideas for your own practice.
Big Ideas Take Flight: No Presentations, Just Conversations
Join these informal, small-group sessions to connect with peers, share insights, and explore big ideas in evaluation. No slides—just conversations that spark new connections.
It has come to our attention that some individuals have received spam emails falsely claiming to sell attendee lists for the AEA Evaluation Conference 2025. Please note that these messages are not authorized by AEA, as we do not share member or attendee contact information with third parties for sale. Any email offering to provide a conference registration list or attendee list for purchase is fraudulent.
If you receive such a message, we recommend deleting it immediately and not engaging with the sender. You may also wish to mark it as spam or phishing within your email program. We take the protection of your information seriously and are actively monitoring these types of scams.
What lies inside the “black box” of evaluation? This new collection of 19 articles explores methods and approaches that go beyond measuring impact to reveal how programs are implemented, operated, and sustained. AEA members can view the collection now, and beginning October 10 it will be open access for three weeks to share with colleagues across the field.
Read Now: Opening the Black Box
AEA and the NDE Editorial Team are excited to announce the launch of INVitE-Pub (Inclusive Voices in Evaluation Publishing), a new initiative designed to expand pathways for publishing in New Directions for Evaluation. INVitE-Pub provides mentorship, support, and guidance to help diverse and emerging voices share their perspectives in the field of evaluation.
Whether you are a first-time author or a seasoned evaluator, the INVitE-Pub team is here to welcome, support, and amplify your voice. Learn more about INVitE-Pub and how to get involved during a one-hour virtual launch event on October 8 from 3–4 p.m. ET. Register here.
Registrants will receive access to the INVitE-Pub Database, a database of scholars in the field of evaluation who are interested in becoming guest editors and/or authors for the journal and are in search of collaborators.
We would also like to introduce Dr. Lisette E. Torres-Gerald (she/her/ella), Project Manager of INVitE-Pub. She is a trained scientist and disabled scholar-activist who is a Senior Researcher at TERC, a math and science education non-profit in Cambridge, MA. Working remotely from her home in Nebraska, she brings her experience as a former writing center director, published author, project manager, and founding editor of an academic journal to the INVitE-Pub initiative.