Date: Thursday, July 31, 2025
Dear AEA Members,
As we step into the second half of 2025, I am filled with gratitude and pride for all we’ve accomplished together this year. The AEA Board has remained steadfast in its commitment to transparency, accountability, and strategic direction, and I want to take a moment to share several key updates.
In addition, the RED (Responding to Emerging Demands) Advisory Group—formed to help the Association better understand and respond to policy shifts and emerging issues—has made meaningful progress on its first charge: to gather and synthesize member experiences and impacts through listening and data collection. This work is informing board-level discussions and guiding our collective response to critical challenges facing our field. In the coming months, members can expect expanded opportunities to engage with the board as we move into broader stages of the RED team’s work, including outreach to allied organizations, documentation of impact, and strategic communication.
While the evaluation community continues to grapple with the far-reaching implications of federal decision-making since January, which has challenged our values, practices, and professional stability, we hope that many of you will still be able to join us at our annual conference. Evaluation 2025: Engaging Communities, Sharing Leadership, is just around the corner. This year’s program promises to be transformative, timely, and rich with connection.
Thank you for your continued dedication to advancing evaluation. I look forward to seeing many of you in Kansas City!
With appreciation,
Karen Terrell Jackson, Ph.D.
President, American Evaluation Association
Do you have a thought-provoking reading you'd love to discuss with fellow evaluators? EvalReaders is currently seeking volunteer facilitators and reading suggestions for our upcoming sessions in August, September, October, and beyond!
Readings are usually short—like a book chapter or article—and don’t have to be evaluation-specific. If something sparked your thinking or felt relevant to your practice, chances are others will find it just as valuable.
Interested in facilitating? Email Lauren.Dixon@Centerstone.org with your reading suggestion. Scheduling is flexible and based on facilitator availability.
The AEA Publishing Corner spotlights work published by our members. If you have a recent publication or professional accomplishment you would like to share, please submit it here.
Congratulations to AEA members Brian Yates and Nadini Persaud on the publication of their co-authored book, Cost-Inclusive Evaluation: Planning It, Doing It, Using It, published by Guilford Press.
View Book
AEA is pleased to announce that Denise Baer, Ph.D., has joined us as our new Advocacy Consultant. A long-standing AEA member, Denise brings a wealth of experience in policy and legislative strategy. Her deep knowledge of the evaluation field and commitment to advancing evidence-informed policymaking will help strengthen AEA’s advocacy efforts and amplify the voice of evaluators nationwide. Stay tuned for opportunities to engage with Denise and support AEA’s growing policy initiatives.
"Evaluation helps us as a community solve problems and inform ourselves about 'what works.' Unlike grad school, where you are taught how to select problems based upon the method, evaluation flips this to apply the most rigorous methods based upon real-world problems. My passion is strengthening our community of practice and thought leadership to support good governance and link grassroots communities to their leaders." - Denise Baer, Ph.D.
Affiliations: Strategic Research Concepts & George Washington University | Degrees: BA, MA, Ph.D. | Years in Field: 30 | Joined AEA: 2010
Bio: Denise Baer has extensive experience in evaluation, methods, evaluation policy, and public affairs, both as a scholar and practitioner in the U.S. and globally, as well as in association management, capacity building, and strategic planning. She is a graduate of the Institute of Management and a vetted evaluation expert for the UNDP Global Policy Network (GPN) ExpRes Roster and the ABA Center for Global Programs. Recently she was an evaluation and gender advisor for the Swedish International Development Agency (Sida) and USAID’s Africa Trade and Investment Program, and she was the Founding Director of the Evaluation Department at the Center for International Private Enterprise (CIPE), one of the core National Endowment for Democracy (NED) institutes.
Denise has provided research, governance, and social science consulting for a variety of federal agencies (OJJDP, DOJ, USAID, and the State Department) and nonprofit organizations, including international work for NDI, IFES, IWPR, and Sida, and she has worked for the Congressional Research Service and as a Hill staffer. She has over 25 years’ experience teaching graduate and undergraduate American politics, research methods, and evaluation at major universities and delivering adult professional training, public leadership, and candidate training (e.g., for the National Governors Association, DOJ, OJJDP, the Aspen Network of Development Entrepreneurs, the National Political Congress of Black Women, the National Women’s Political Caucus, and the Committee for the Study of the American Electorate, among others).
As a scholar, Denise is the author or co-author of three books and over a dozen scholarly articles using experimental, survey, and qualitative data. Her current research includes completing a book, Delivering Measurable Performance: Performance Evaluation Methods, Strategies and Tools for Policymaking and Public Management, under contract with SAGE Press, and political ethnographic research on policy groups active in public affairs.
Within AEA, Denise’s work since 2015 has focused on the Democracy, Rights & Governance TIG, where she now serves on the DRG Board, works with the African Evaluation Association (AfrEA), and is an active member of other TIGs and Washington Evaluators.
By Nathan Varnell, Consultant for the Evaluation Policy Task Force
June and July have been abuzz with news as the state of America’s evaluation and evidence infrastructure continues to evolve.
On July 4, President Trump signed into law the One Big Beautiful Bill Act (H.R. 1), which makes expansive policy changes for the country. The package, passed through budget reconciliation (a special legislative process used to change existing laws and programs regarding spending, revenue, or the debt limit), contains several provisions relevant to the research community, including significant reductions to student loan programs and support for further government cost-cutting measures.
The fiscal year 2026 congressional appropriations process is also underway; two House appropriations bills, from the Agriculture and Legislative Branch subcommittees, have already advanced to the Senate. In positive news for the federal evaluation community, the Senate rejected the steep cuts to the Government Accountability Office’s budget that the House had proposed. The EPTF will continue to monitor the outcomes of H.R. 1 and the ongoing appropriations cycle.
Further coverage on the appropriations cycle and how the One Big Beautiful Bill Act may impact federal scientific communities is available from the Consortium of Social Science Associations (COSSA).
Following a July 8 Supreme Court decision lifting a lower court injunction, the administration has restarted implementation of its Agency Reduction-in-Force (RIF) and Reorganization Plans (ARRPs), which were covered in the May edition of Policy Watch. The ARRPs could affect data and evaluation units in many agencies, although the extent is unclear at this time and litigation is ongoing.
The Office of Management and Budget (OMB) is also restructuring, eliminating the Evidence Team that was established in 2010 to consolidate performance and evaluation functions and provide government-wide leadership, as first reported in the Data Foundation’s June Evidence Capacity Pulse Report. The implications of this decision for implementation of the Foundations for Evidence-Based Policymaking Act of 2018 (Evidence Act) and related guidance are still unfolding. Likewise, the White House announced an extension of the government-wide hiring freeze until October 15, barring agencies from filling vacant positions or creating new job postings. When the freeze is lifted, agencies will be limited to one new hire for every four vacant federal employee positions, likely limiting the government’s capacity to expand evaluation teams.
However, amid ongoing government-wide restructuring, the infrastructure created by the Evidence Act remains active in many agencies. Although previous editions of Policy Watch reported vacancies in Chief Evaluation Officer posts, agencies are gradually appointing new full-time officials and vacancies are decreasing. Likewise, at least 9 of the 24 Evidence Act agencies have published annual evaluation plans for FY 2026, signaling that institutional evaluation policies and practices are enduring. The Office of Personnel Management (OPM) also released a Program Evaluation Career Path Guide to help federal agencies implement requirements of the Evidence Act. The guide advises agencies on creating or enhancing career paths for program evaluators, including job progression processes, career success factors, and career-stage components with associated competencies, developmental experiences, and training options.
Finally, the White House Office of Science and Technology Policy issued guidance to federal agencies on June 23 on incorporating Executive Order 14303, “Restoring Gold Standard Science,” into their research activities. Agencies will submit implementation plans by August 22, with the notable requirement that agencies explain how they will develop standardized metrics and evaluation functions to assess adherence to tenets of “Gold Standard Science” and demonstrate their impact on scientific quality.
The upcoming months through the end of the federal fiscal year are expected to be busy with more news and events. Below are several events and resources for AEA members to consider:
The Evaluation Policy Task Force continues to monitor the impacts of federal policy decisions for the evaluation community. If you are aware of changes in government and the evaluation community that are impacting your work or the work of other evaluators, consider providing information to the Evaluation Policy Task Force via evaluationpolicy@eval.org.
Tuesday, September 9 | 5:00 PM - 6:45 PM ET | Texas A&M Bush School DC
Hosted by APPAM and sponsored by AEA
Join fellow researchers and practitioners for drinks, networking, and a lively discussion on how AI is reshaping public policy. Hear from experts on the promise and pitfalls of AI in governance—from ethical concerns to smarter infrastructure—and stay for a Q&A and reception.
Register for Free
Join evaluators from around the globe this November 10–14 in Kansas City, MO, for Evaluation 2025—AEA’s flagship conference.
This year’s event, themed Engaging Communities, Sharing Leadership, will feature 300+ sessions, hands-on workshops, and meaningful opportunities to connect with peers and advance your evaluation practice.
Register by Tuesday, September 9, to save with early bird pricing!
Register Now
In June, EvalPartners unveiled the Global Evaluation Agenda 2.0 (GEA 2.0): EvalAgenda for Future-Fit Evaluation. The Agenda advocates for a collective commitment to advancing the profession and is relevant to all evaluators. It is also a practical source document that includes starting points for action. It covers four dimensions: Enabling Environment for Evaluation, Institutional and Organizational Capacities, Individual Capabilities, and Key Catalytic Actions and Synergies. In introducing the new agenda, the report highlights 10 evolutionary “transitions” in evaluation, ranging from the progression of evaluation from a capacity to a discipline to the shift from methodological and technical criteria to value-based criteria. The Agenda was launched with global evaluation leaders. Learn more about the Agenda and how you can be involved here. And look out for an invitation from the ICCE TIG to learn how you can get involved and advance your evaluation practice through GEA 2.0!
Registration is now open for the 2025 International Conference for Realist Research, Evaluation and Synthesis, taking place September 22–25 in Atlanta, Georgia, USA. This biennial convening brings together researchers, evaluators, practitioners, students, and commissioners from around the world to explore advances in realist methodologies.
With a theme of Developments in Realist Methodology, this year’s conference will feature sessions on realist economic evaluation, integrating realist approaches with other methods, and contextualizing realist methodologies across diverse settings. Whether you're a seasoned realist or new to the field, this event offers valuable opportunities to learn, connect, and collaborate.
Greetings, AEA members and AJE readers!
We are excited to share that, starting in August 2025 and rolling out roughly monthly for the remainder of our editorial leadership term, AJE will be making some Online Collections available. We aim to carefully curate each of our issues, and these Online Collections permit us to curate across issues. Please stay tuned for news about the following Online Collections:
Because AEA membership includes full access to AJE’s articles, you can always access the journal’s content in full. But you may have colleagues or partners who are not members. Please consider sharing links to these Online Collections with them: our publisher, SAGE, will make the content of each new Online Collection freely available for the collection’s first three weeks. Read, share, use, love, cite!
Signed,
Laura Peck & Rodney Hopson
Your AJE Co-Editors-in-Chief
The latest issue of New Directions for Evaluation, the official quarterly sourcebook of the American Evaluation Association, is now available online.
Read the current issue, Incorporating Open Science Into Evaluation, to explore how open science practices can be incorporated into evaluation.
Also, access the NDE online archive to reference past issues.
Learn More
Wednesday, August 13 & 20 | 1:00 p.m. ET | eStudy
This eStudy explores the transformative role of artificial intelligence (AI) in research data analysis. Across two sessions, participants will gain foundational knowledge and hands-on experience using AI tools for qualitative and quantitative data analysis, as well as an understanding of the ethical considerations associated with AI in research.
September 16 & 30 | 12:30 p.m. ET | eStudy
Are you and your organization struggling to measure impact cohesively across your portfolio of projects in a streamlined and comprehensive way? Is your work informed by trust-based values, systems change, and complexity? If this sounds like you, this eStudy surely will NOT have all the answers, but it WILL offer some fresh approaches to measuring systems change across diverse portfolios or bodies of work without overly burdening grantees. Join us for an interactive eStudy focused on: (1) creating shared language around the challenges of measuring systems changes across disparate portfolios, (2) demonstrating strategies for portfolio-wide systems change measurement, and (3) providing attendees with facilitated opportunities to apply the eStudy content to their work in real time. Attendees will leave this interactive workshop with tangible next steps to take back to their work.
Thursday, September 23 | 2:00 p.m. ET | Webinar
Evaluation and futures studies are two disciplines that have much to offer one another. Evaluation can inform the quality and effectiveness of foresight initiatives, such as whether an environmental scanning system surfaces signals of change that support organizational long-term thinking and preparedness. Systematically thinking about the future can free evaluation from being a primarily hindsight-based discipline and enable it to play a more active role in informing strategy and decision-making, benefiting evaluators, clients, and programs. To realize these benefits, evaluators need to reskill and broaden their gaze beyond ex-post and ex-ante evaluation. Futures studies constructs, such as anticipation, decolonizing the future, future generations, futures consciousness, and temporal models of change, require additional education. Workshops in methods such as the futures wheel, alternative scenarios, and scanning for weak signals will support evaluators in “…moving beyond assumptions based on the past, accept[ing] the complexity of the situation, and creat[ing] alternatives” (Fred Carden, p. 5). In this webinar, Annette L. Gardner, Thomas Kelly, and Eric Barela describe, discuss, and demonstrate constructs, definitions, and methods that can get evaluators started on their ‘foresight journey.’