AEA Newsletter: March 2016

Message from the President - Connecting Communities of Practice 

From John Gargani, 2016 AEA President

The board and I have been busy over the past few months building connections with other professional communities that have a need for evaluation and often little familiarity with it. It’s an important initiative. As I like to say, we are part of the largest profession no one has ever heard of. So when AEA reaches out to the business, design, or other communities, we may seem to come out of nowhere. It takes care and time to build these new relationships. So much, in fact, that we’ve started calling ourselves “ambassadors for evaluation.”

Yet we have made amazing progress in a short period of time. AEA organized a panel of evaluators who presented at Harvard’s well-respected Social Enterprise Conference and organized a similar panel and a keynote for Wharton’s Social Impact Conference in San Francisco. From what I have been told, this is the first time evaluators have been part of these events. Thanks to AEA members Stewart Donaldson (past president of AEA), Robin Miller (past editor of AJE), and Veronica Olazabal (Rockefeller Foundation), the presentations were a huge success.

By participating in events like these, we are able to connect AEA’s values, knowledge base, and professional network with like-minded professionals who are unfamiliar with evaluation. The Harvard and Wharton events support a community of professionals dedicated to using private capital to promote the public good. They include impact investors, social entrepreneurs, and socially-minded corporations. These efforts are growing rapidly, have the potential to have substantial impact on the world, and depend on evaluation for their success.

Evaluation is so important to this community that AEA and a growing list of partners (including Social Value International and the Evaluation Unit of the Rockefeller Foundation) are organizing a new event on the topic of measuring impact in this context.

I can’t announce its name (but we are calling it Impact Convergence 2016), where it will take place (The Carter Center in Atlanta, Georgia), or when (October 24-26, just before AEA’s annual conference, also in Atlanta). Nor can I discuss what we will do (connect the vision of thought leaders to an inclusive, design-based approach to setting an agenda for advancing the state of the art of impact measurement in this context). Nor can I say how the partners (foundations, associations, and others) will participate in Evaluation 2016 to gather input on the agenda and provide opportunities for professional networking.

If I could tell you, I believe you would think it is pretty cool. And you would be proud that AEA is leading the effort.

Don’t worry, I’ll be sure to let you know when I can announce Impact Convergence 2016, which conveniently will take place at the Carter Center, immediately before our annual conference.

I hope you will join me there, as an ambassador for evaluation, and help AEA build an inclusive community for change.

 

Diversity - Space for Culturally Responsive Evaluation at Evaluation 2015

From AEA Graduate Education Diversity Internship Program Scholars, 2015-2016 (Thana-Ashley Charles, Dominic Combs, Dani Gorman, Agustin Herrera, Marques Hogans, Monique Liston, Nancy Mendoza, Ibukun Owoputi, Kenneth Pass, Leah Peoples, Jamie Vickery) 

The Graduate Education Diversity Internship (GEDI) Program works to engage and support students from groups traditionally under-represented in the field of evaluation. Each year, these talented scholars are tasked with completing a comprehensive service project that challenges them to apply what they have learned through their respective university programs, host site experiences, and the GEDI internship itself. Take a look at what this year’s GEDI scholars have been working on since entering the program this fall.

This year, the AEA Graduate Education Diversity Internship (GEDI) program’s 13th cohort was tasked with exploring how cultural responsiveness presented itself during the Evaluation 2015 conference. Cultural responsiveness is, in fields external to evaluation, an effort to support decolonization of communities, racial equality, and social justice. Evaluators retooled the concept and practice to address the lack of sensitivity to issues of privilege and power relationships (Hood, Hopson & Kirkhart, 2015). AEA’s Cultural Competence Task Force worked for six years to develop an agreed-upon understanding of cultural competence. These efforts were the direct result of a recommendation provided by AEA’s W. K. Kellogg Foundation-funded Building Diversity Initiative, which highlighted the need for evaluators to incorporate cultural context and diversity in evaluation practice (AEA, 2011). Culturally responsive evaluation (CRE) has grown in popularity within AEA and is integral to its organizational values (AEA, 2015). However, culture and the ways that evaluators address this construct are not presented in any standardized way. Our evaluation interrogated how Evaluation 2015 attendees interpret and use CRE in their practice.

We hoped to gain insight into the ways that evaluators are implementing CRE in their work. Our three guiding questions were: 

  • How do Evaluation 2015 participants define CRE?
  • How does CRE emerge in Evaluation 2015 participants’ work, if at all?
  • To what extent, if at all, do CRE-based conference presentations differ from one another?

We conducted a descriptive evaluation using participant observations and incorporated CRE-related items into the post-conference survey to understand how Evaluation 2015 participants defined and used CRE. We began with one exploratory focus group to inform and refine the observation and survey protocols. We used a standard observation guide to collect information while visiting 18 randomly selected conference panels and workshops. Approximately 350 attendees who responded to the post-conference survey completed the CRE items using the available Likert scales.

Key Findings

  • Evaluation 2015 participants do not have a shared CRE definition.
  • Evaluators who self-identified as experts (compared with those who identified as new or advanced evaluators) are less likely to identify CRE as important to their evaluation practice.
  • Observers (GEDI scholars) found evidence of a link between the quality of CRE-related presentations and delivery that used culturally responsive pedagogy.

Reflection and Next Steps

Contextual observations of attendees suggested that CRE is neither a widespread practice nor a universally welcome one. (We took particular interest in the attendance rate at the Presidential Strand’s exemplary culturally responsive sessions and in an open-ended “don’t care” response to a question about CRE’s importance to evaluators’ individual practice.) However, we have enough information to suggest that practitioners are aware of CRE (defined variously) and that those who identify as new or advanced evaluators tend to find the practice important in their work. Future work examining how conference attendees define CRE would benefit from (1) a larger response pool, (2) a set of pre-organized focus groups involving evaluators from various fields and practice levels, and (3) an opportunity to observe purposefully selected sessions more strategically to understand any impact of presentation style, room arrangement, presenter, and topic.

As this was our first collaborative project for the program, we learned a great deal from the fieldwork and appreciated the experience of using unfamiliar research methods. We look forward to sharing the full details of our report at the AEA Summer Institute.

 

Potent Presentations Initiative - The Research Behind the Rhetoric

From Sheila B. Robinson, Potent Presentations Initiative Coordinator 

Evaluation 2016 proposal submitted: Check! Time to relax and just wait for that acceptance letter, right? Wrong! Potent Presenters know that to be successful, they must make time to study the art and science of presentations and then practice applying that learning to their own presentations. After all, your brilliant material and fabulous personality will only take you so far. But don’t take my word for it! Here’s the good news: The Potent Presentations Initiative (p2i) website hosts a wealth of research on key aspects of presenting.

Now is the perfect time to check out The Art and Science of a Successful Presentation for a review of some of the research behind the p2i principles you see reflected in the resources. You can read the report online or download the pdf. This report, in the form of an annotated bibliography, is organized around the following themes:

  1. Crafting a Strong Message
  2. Establishing Credibility
  3. Planned Informality
  4. Designed Interactivity
  5. Purposeful Delivery

Each theme in the report features brief chunks of explanatory text, a series of steps to follow, concrete examples and strategies, and full references to the research studies reviewed. It’s easy to think that your time should be chiefly devoted to developing your content, strongly related to the first theme, crafting a strong message, but there is much more to learn! For example: How do you establish credibility? Hint: It’s not simply by revealing how many degrees you’ve earned, or the high-powered job titles you’ve had. How do you build in informality? Hint: You don’t have to become a world-class comedian.

Reading this report and considering how to apply these themes to your presentation practice is time well spent. 

New p2i blog

The p2i blog will be up and running soon! Have you delivered a Potent Presentation? Have you successfully used a p2i resource to craft a presentation? Have you used other helpful resources that informed your message, design, or delivery? Interested in contributing to the p2i blog? Email me at p2i@eval.org with your ideas.

Pardon our dust …

We are in the process of updating and migrating the p2i site. We’re moving a few things around and are doing some tidying up. Don’t worry – all of your favorite p2i resources will still be available. Stay tuned for news and updates.

 

Policy Watch - FY 2017 Office of Management and Budget A-11 Circular 

From Cheryl Oros, Consultant to the Evaluation Policy Task Force (EPTF)

The president is responsible for submitting an annual budget to Congress. The Office of Management and Budget (OMB) coordinates the budget preparation, providing procedural guidance through its Circular A-11.

The FY 2017 circular provides detailed discussions of the role of evaluation in the budget process. It describes evaluation approaches, evidence, data limitations, external factors, and alignment of evaluation and performance management. It also summarizes the requirements of the GPRA Modernization Act of 2010 and describes the Performance.gov website, a centralized source of government performance information.

Here are relevant excerpts and summaries from the Circular: 

  • Agency leaders are expected to consider the available evidence, including any available evaluation results, when analyzing progress toward goals. As appropriate, such analysis should consider whether the goals and indicators have been validated through research to be well-correlated with ultimate outcomes, implications of available research on the appropriateness of the measure, and whether the available research indicates that the use of the progress measure may encourage negative unintended consequences.
  • Chief operating officers, supported by performance improvement officers and research and evaluation officers, are responsible for establishing a performance and evidence culture within the agency that sets priorities and challenges for managers and employees at all levels of the organization to focus on better outcomes and lower-cost ways to operate. They should work to establish a culture of continual learning where staff identify critical questions and search for, test, and expand the use of effective practices.
  • Performance management and program evaluations should be aligned and complementary, where appropriate. Performance management tracks results on an ongoing basis to ensure efficiency. Evaluations are carried out periodically using rigorous designs and methodologies, particularly to estimate impacts and determine causality.
  • The “intended use” of evidence concept implies that high-stakes decisions should be based on a preponderance of evidence developed using sound methods when feasible. Some programs will need a high level of credibility and precision in the portfolio of evidence on which leaders base a decision. This may require multiple randomized, controlled trials assessing the effectiveness and safety of an approach within the portfolio of evidence. However, decisions about how to improve the outreach of a given program may not require the same level of precision or as large a portfolio of evidence.
  • Evidence (the available body of facts or information indicating whether a belief or proposition is true or valid) can be quantitative or qualitative and may come from a variety of sources, including evaluations, performance measurement, and retrospective reviews. Evidence has varying degrees of credibility, and the strongest evidence generally comes from a portfolio of high-quality evidence rather than a single study.

Evaluations should use the most rigorous methods that are appropriate to the evaluation questions and feasible within budget and other constraints. Rigor is important for all types of evaluations. Impact evaluations require that (1) inferences about cause and effect are well-founded (internal validity); (2) there is clarity about the populations, settings, or circumstances to which results can be generalized (external validity); (3) measures accurately capture the intended information (measurement reliability and validity); (4) samples are large enough for meaningful inferences; and (5) evaluations are conducted with an appropriate level of independence by experts external to the program, either inside or outside an agency.

 

International Policy Update - AEA Appoints Two Experts to New Global Network on Professionalization

From Mike Hendricks, AEA Representative to the International Organization for Cooperation in Evaluation (IOCE), with contributions from Jim Rugh, EvalPartners Co-Coordinator


AEA President John Gargani has just appointed not one but two of our very best experts to represent AEA on a new initiative on professionalization from the International Organization for Cooperation in Evaluation (IOCE). This is a strong signal that AEA believes this new initiative is vitally important.

Dr. Donna Podems is founder and director of OtherWISE: Research and Evaluation, a small South African research and evaluation company. She currently serves on the AEA Board of Directors and previously served on the board of the South African Monitoring and Evaluation Association (SAMEA) in South Africa, where she lives. Donna has also consulted for various U.N., bilateral, and multilateral agencies, as well as nonprofits and foundations in more than 20 countries over the past 20 years.

Donna was the author of the South African Evaluation Competency Framework for Government, the lead author of the South African study on pathways for strengthening evaluators, and the lead editor of a special issue of The Canadian Journal of Program Evaluation on professionalizing evaluation around the globe. She is also a key member of an AEA task force to develop evaluator competencies for AEA members and has been reporting regularly to the AEA Board on the international discussions around professionalization.

Dr. John LaVelle is an assistant professor at Louisiana State University, having received his Ph.D. in evaluation and applied research methods from Claremont Graduate University, where he studied with AEA past-president Stewart Donaldson, Michael Scriven, and others. While at Claremont, John also was the director of operations and external affairs at the School of Social Science, Policy, and Evaluation.

John’s academic interests focus specifically on professionalization in evaluation, and he has authored several peer-reviewed articles on just this topic. He is especially interested in the international job market for evaluators and the formal university systems set up to meet its needs. He has explored topics such as analyzing the international job market for evaluators, the university systems across the world that prepare evaluators, and methods for reaching out to potential applied researchers and evaluators.

AEA, IOCE, and the global evaluation community all benefit from having such high-level persons involved in this new network. Our thanks to Donna and John for serving in this vitally important role.
