Offered: Sunday, June 5, 9:00 a.m. – 4:30 p.m.
Presenters: Daniel Kidder, Linda Vo
This workshop will provide an overview of program evaluation for Summer Evaluation Institute participants with some, but not extensive, prior background in program evaluation. The foundations of this workshop will be organized around the Centers for Disease Control and Prevention’s (CDC) six-step Framework for Program Evaluation in Public Health, as well as the four sets of evaluation standards from the Joint Committee on Standards for Educational Evaluation. The six steps constitute a comprehensive approach to evaluation; while its origins are in the public health sector, the Framework can guide any evaluation. The workshop will place particular emphasis on the early steps, including identifying and engaging stakeholders, creating logic models, and selecting and focusing evaluation questions. Through case studies, participants will have the opportunity to apply the content and work through some of the trade-offs and challenges inherent in program evaluation in public health and human services.
Presenters: Stewart Donaldson
Every time we try something new, we ask ourselves, “Is it better than before? What makes it good? What is its value?” This process of valuing may be applied to anything from purchasing a computer to judging the quality of a school curriculum or an organization’s training program. The art and science of valuing is called evaluation. All human beings evaluate, albeit informally, but the ability to evaluate systematically and accurately is important to our society and has the power to help improve individual lives and society as a whole. This course aims to introduce you to some of the most important concepts and methods that underpin the thriving profession of evaluation, and to a wide variety of ways to use professional evaluation knowledge in your career. You will learn about fundamental topics and concepts in four areas: evaluation theory, evaluation methods, evaluation practice, and research on evaluation practice.
Offered: Monday, June 6, 9:00 a.m. – 12:00 p.m.; Tuesday, June 7, 9:00 a.m. – 12:00 p.m.
Presenters: Ann W. Price, Susan M. Wolfe
This hands-on, interactive skill-development workshop is designed for evaluators who evaluate coalitions and community collaboratives. Through lecture, discussion, and exercises, it will provide the foundations and tools needed to conduct evaluations of coalitions. Workshop presenters will review topics such as frameworks for evaluating coalitions, measures and tools, and common challenges. Participants will engage in role-play exercises that highlight some of these challenges and issues. The presenters will share their experiences and provide case studies as a basis for discussion and to help attendees apply the material to the types of situations and settings they will encounter.
Intermediate: Attendees need some basic knowledge of the specific content area, but need not have in-depth knowledge or skills. The workshop will focus on knowledge or skills that build on the basics.
Offered: Monday, June 6, 1:30 p.m. – 4:30 p.m.; Tuesday, June 7, 9:00 a.m. – 12:00 p.m.
Presenters: Rebecca Woodland, Becky Mazur
Logic modeling, the process of making explicit the tacit understandings of how undertakings such as interventions, programs, and networks are theorized to bring about change, is considered standard, and even best, professional practice in the field of evaluation. Humans want to design, develop, and deliver services and systems that have value and worth, and evaluator facilitation of logic modeling can enable practitioners to make informed and inspired decisions about resource allocation, organizational design, program improvement, and intended outcomes. Yet logic modeling gets a bad rap from both evaluation stakeholders (many of whom see it as a painfully transactional and overly linear discrete task, a tool they must ‘produce’ to satisfy some external entity) and evaluators (a great proportion of whom aren’t entirely clear on its merit, or are ill-equipped to facilitate its development and use).
In this highly active and engaging workshop, attendees will delve into the myths and merits of logic modeling and acquire specific strategies for integrating logic modeling throughout the program evaluation cycle. Attendees will receive the Logic of Logic Modeling (LLM) framework, which makes explicit the precipitating needs, necessary resources, key activities, intended outputs, and longer-term outcomes of logic modeling. We believe widespread evaluator use of logic modeling is central to advancing the mission of the AEA. If evaluators, writ large, have competence and confidence in the enactment and management of logic modeling, then evaluation practices will improve, evaluation use will increase, evaluation as a profession will be promoted, and evaluation generally will be more likely to contribute to the generation of theory and knowledge about effective human action.
Offered: Monday, June 6, 1:30 p.m. – 4:30 p.m.; Tuesday, June 7, 1:30 p.m. – 4:30 p.m.
Presenters: Sharon Attipoe-Dorcoo, Ph.D., M.P.H., Norma Martínez-Rubin, M.P.H., M.B.A.
This workshop will offer participants a means to examine what being “culturally responsive” and “equitable” is in current evaluation practice. The workshop’s first half includes an overview of content derived from scholarly work on culturally responsive and equitable evaluation (CREE). Workshop facilitators will use CDC’s framework for program evaluation as a backdrop to acquaint participants with established evaluation planning and design concepts. In the second half of the workshop, an introduction to the I.M.P.A.C.T. framework originated by the workshop facilitators will launch further discussion on specific aspects of CREE. Each I.M.P.A.C.T. component encourages the evaluator to reflect on how to apply a CREE mindset for evaluative purposes.
• Inclusive: For whom am I working? How do community members engage in defining the “problem” from a strength-based perspective to attain equitable and culturally derived solutions?
• Manumit: The evaluator sets free their preconceptions and listens to community members in a co-creative process.
• Practice-based: Evaluators systematically identify assumptions and co-create practical recommendations for social change in a particular setting.
• Accessible: Evaluators present study findings, or results, in formats and at literacy levels that are comprehensible to community members and allies whose lives or ways of operating will be influenced by the evaluation.
• Community-focused: Evaluators’ philosophical and technical approaches demonstrate adaptability and the understanding that methodological rigor may not align with community needs.
• Timely: Evaluators are mindful of the urgency and timeliness of acquiring evidence derived from community engagement to help rectify inequities in myriad social systems traversed by those communities.
Offered: Monday, June 6, 9:00 a.m. – 12:00 p.m.; Monday, June 6, 1:30 p.m. – 4:30 p.m.
Presenter: Rebeca Pop
Data is crucial to the success of any evaluator or researcher. It’s essential to get the right data, at the right time, in the right way. But the journey doesn’t stop there: once you have the data you need, it’s even more important to be able to communicate it effectively.
In this workshop, participants will learn fundamental tips and tricks to help them avoid common graphing mistakes, and they will have the chance to practice using Excel. This is a hands-on, interactive workshop that includes discussions, polls, exercises, and even a fun Kahoot competition.
Presenters: Anh Thu Hoang, William E. Pate II
To solve today’s most complex and pressing problems, we as an evaluation community need to re-examine how we ‘do’ evaluations to render evidence-based solutions. As practitioners, we have a duty to widen our purview and alter our practice toward more inclusive, unbiased perspectives, resulting in better guidance for sustainable change. How can this be done? A systems approach to evaluation can guide how evaluations are designed, conducted, and ultimately used. Evaluation needs to help resolve global problems; it has a unique and important role in demonstrating impact and shaping policy to effect change for social good.
This workshop will introduce a systems thinking approach to improve evaluation processes and results. Resolving complex social problems requires more than good data, evidence, and adequate technology; it requires inclusive viewpoints in the evaluation design process, problem and goal setting, agreement on success indicators, and more. Traditionally, the evaluation design process and subsequent decisions are made by the donor with negligible input from beneficiaries. Applying a systems perspective encourages widening the circle of stakeholders; the evaluation process and resulting evidence will necessarily shift.
Stakeholder engagement is a fundamental component in theories of change and logic models. As currently practiced, however, it is largely nominal and treated as a checklist item to be completed once. We believe that stakeholder engagement can be so much more. This interactive workshop is intended for evaluators and program managers of all levels interested in how to truly engage with stakeholders and measure their participation in the program/project cycle. The workshop will demonstrate how this approach leads to better outcomes with increased diversity, equity, and inclusion (empowerment).
Beginner: Attendees need no prior knowledge of the specific content area in order to participate fully and effectively in the workshop. The information or skills will be new for those who enroll.
Presenter: Ana Coghlan, PhD
A major purpose of many program evaluations is to generate data for decision making. However, how can we be sure that our data are of good enough quality to ensure well-informed decisions? While evaluators may receive training in aspects of data quality, overarching ways to enhance and manage data quality are rarely addressed. In this workshop, evaluators will be introduced to a comprehensive data quality management (DQM) system, first developed by the Global Fund and several international development agencies, that consists of specific data quality assessment criteria and standard operating procedures. Through large- and small-group discussions, participants will first identify their own data quality issues. Participants will then review and relate their own experiences to the assessment criteria and identify procedures for strengthening the quality of their data. Lastly, participants will review the basic components of a Data Quality Management Plan.
Presenters: Paul Elam, Mindelyn Anderson, Kristine Andrews
The field of evaluation is being challenged to use a process that considers both who is being evaluated and who is conducting the evaluation. MPHI has worked to develop useful frameworks, tools, and approaches that help evaluators consider the ways race and culture might influence an evaluation process; this work has resulted in a framework for conducting evaluation using a culturally responsive and racial equity lens. This workshop focuses on the practical use of a racial equity lens when conducting evaluation. The framework argues that culture and race are important considerations in any evaluation because critical and substantive nuances are often missed, ignored, and/or misinterpreted when an evaluator is not aware of the culture of those being evaluated.
Participants will be provided with a Template for Analyzing Programs through a Culturally Responsive and Racial Equity Lens, designed to focus deliberately on an evaluation process that takes race, culture, equity, and community context into consideration. Presenters will also share a “How-to Process” focused on the cultural competencies of individuals conducting evaluations, how such competencies might be improved, and strategies for doing so. This “How-to Process” grew out of efforts to develop a self-assessment instrument for evaluators, is based primarily on the cultural-proficiencies literature, and relates specifically to components of the template. Participants will have the opportunity to engage in small-group exercises to apply the concepts in the template to real-world evaluation processes. Based on these experiences, participants will gain practical knowledge of the use of the lens.
Offered: Monday, June 6, 9:00 a.m. – 12:00 p.m.; Wednesday, June 8, 9:00 a.m. – 12:00 p.m.
Presenter: Matt Feldmann
As a program evaluator, are you thinking about going out on your own? For many, this is an exciting but intimidating prospect. This workshop will reveal the simple but essential design and start-up skills needed for success. Matt Feldmann will lead this introductory workshop, which has been foundational to the development of many consulting practices and small internal independent evaluation shops. It will provide you with a plan for initiating your independent consulting practice.
Offered: Tuesday, June 7, 9:00 a.m. – 12:00 p.m.; Wednesday, June 8, 9:00 a.m. – 12:00 p.m.
Presenters: John LaVelle, Stewart Donaldson
This workshop is designed to provide practicing evaluators with an opportunity to improve their understanding of how to use theory to improve evaluation practice. Lecture, exercises, and discussions will help participants learn how to apply evaluation theories, social science theories, and stakeholder theories of change to improve the accuracy and usefulness of their evaluations. A range of examples from evaluation practice will be provided to illustrate main points and take-home messages.
Offered: Monday, June 6, 9:00 a.m. – 12:00 p.m.; Tuesday, June 7, 1:30 p.m. – 4:30 p.m. Please note the Tuesday time slot has moved from the morning.
Presenters: Dylan Felt, Esrea Perez-Bill, Gregory Phillips II
This advanced-level workshop will provide a theoretical, practical, and justice-focused approach to lesbian, gay, bisexual, transgender, and queer (LGBTQ+) Evaluation. Through hands-on activities and collaborative inquiry, participants will develop advanced skills enabling them to confidently, competently, and collectively work towards LGBTQ+ inclusion and liberation in their practice. Principles of Equity, Justice, and Cultural Responsiveness within the field of evaluation form the basis of some of our profession’s most important work. Centering marginalized voices, experiences, and individuals in our practice is critical to ensuring that we as evaluators are poised to best support our clients and partners. While a large body of theory and practice has grown around these principles, recent discourse has acknowledged the longstanding exclusion of the LGBTQ+ community from most discussions of culturally responsive and transformative evaluation. Though sometimes mentioned, LGBTQ+ Evaluation has rarely been seriously considered or discussed by the field at large. Moreover, to date, no frameworks, models, or principles specific to LGBTQ+ Evaluation have been widely adopted and implemented.
Recent years have seen the beginning of a sea change in LGBTQ+ Evaluation, as this work has begun to be recognized for its importance within the field. The presenters of this workshop are particularly proud to have been at the forefront of much of the theoretical work that has helped advance this conversation, including forthcoming keystone publications on the topic in both the American Journal of Evaluation and a special issue of New Directions for Evaluation for which we serve as Guest Editors. However, even with the growing availability of this body of theory, many evaluators may justifiably feel intimidated or uncertain about how to practically approach designing and implementing an LGBTQ+ Evaluation, given that there is relatively little prior literature in the field to draw on and no specific guidance available from professional organizations regarding practice with LGBTQ+ communities. This session draws on the momentum of the LGBTQ+ Evaluation theory-building process and its relevance to the field while addressing gaps in specific standards or guidance by providing attendees with the conceptual, methodological, and practical skills they will need to become effective LGBTQ+ evaluators.
“Theory, Practice, and Praxis for Liberatory LGBTQ+ Evaluation” is an advanced, project-based workshop aimed at providing attendees with hands-on, collaborative, collective opportunities to reflect and build upon LGBTQ+ liberation in their own practices. This workshop takes an inquiry-based approach to teaching LGBTQ+ Evaluation, encouraging attendees to thoughtfully and critically interrogate what it means, in theory, practice, and praxis, to conduct evaluation with LGBTQ+ communities in a way that is culturally responsive, equitable, and transformative. Liberatory adult education theories and texts, such as Freire’s Pedagogy of the Oppressed, will guide both curriculum and teaching throughout the session. Participants will be encouraged to practice creativity and experiment with anti-oppressive practices.
This workshop offers attendees who are already familiar with the basics a hands-on opportunity to build LGBTQ+ Evaluation skills, and it will challenge them not to remain stagnant in their pursuit of inclusivity. The workshop is unique in that it presents a synthesis of novel theory in the LGBTQ+ Evaluation space: this year will see the publication of a framing paper in the American Journal of Evaluation as well as an LGBTQ+ Evaluation-focused special issue of New Directions for Evaluation, both led by our team. The workshop therefore represents an opportunity to take advantage of a unique and crucial moment in the process of making LGBTQ+ Evaluation mainstream.
The core of the workshop is a project-based inquiry in which participants consider what makes an evaluation LGBTQ+ inclusive and liberatory through practicing designing and implementing an LGBTQ+ Evaluation themselves. Participants will be divided into small groups and given a scenario of an evaluation including overall design, implementation and governance strategies, and data, which they must improve by drawing on the theories and principles introduced in the first activity. Scenarios will reflect real-world challenges and limitations, and participants will need to be creative in how they approach resolving complex issues. Rather than emphasizing a single “right answer,” these scenarios will push participants to use guiding theories and principles to inform decisions made with all of the constraints which evaluations face in the real world. Participants will be able to choose one aspect of an evaluation to focus on, and will select from a variety of prompts to respond to. Prompts intentionally push back on a narrow conception of evaluation by incorporating creative and arts-based activities which allow for greater choice and engagement among participants.
Advanced: To participate fully, attendees must have a substantial working knowledge or skill level in the specific content area. Generally, attendees currently use the knowledge or skills in their jobs. At this level, advanced knowledge or techniques are offered to refine and expand current expertise.
Offered: Tuesday, June 7, 1:30 p.m. – 4:30 p.m.; Wednesday, June 8, 9:00 a.m. – 12:00 p.m.
Presenters: JoAnna Hillman, Janelle Gowgiel, Christiana Reene
Instructors Janelle Gowgiel, JoAnna Hillman, and Christiana Reene are on a mission to free the world from crappy surveys. Through this workshop, we want you to become a “No More Crappy Surveys” ambassador. Whether you’ve designed only one survey or have experience deploying many, this workshop will provide a complete overview of surveys from planning to wrap-up, with hands-on activities to put your learning into practice. You will learn effective survey practices, including starting with evaluation questions, developing proper questions, framing questions to gather usable data, and implementing survey logic (branching, skip, display). The workshop will help you identify common issues that derail survey effectiveness and help you avoid these pitfalls. Finally, a case study exercise will help you discover best practices in reporting, including engaging partners for informed decision making. Participants are encouraged to bring a sample survey for peer review and discussion.
Presenters: Ashley B. Akenson, Andrea Arce-Trigatti
As evaluators, we are called to evaluate projects of many types in multiple contexts and have a responsibility to support the quality and sustainability of these projects through our work. We may often serve on projects in unfamiliar contexts or work with individuals or organizations who believe differently than we do. The current insular and divisive US socio-political context highlights the need for evaluation practices that are responsive, inclusive, respectful, and compassionate if we are to heed evaluation’s call to contribute “to the welfare of humankind, and, more generally, to the welfare of the planet we inhabit” (Scriven, 2004, p. 183). In response to the growing dis-ease and division, we have applied mindfulness, compassion, empathy, awareness, and elements of design thinking to create the Wide Open Knowledge Environments (W.O.K.E.) framework and to facilitate the inclusive, responsive relationships that serve high-quality, contextually driven evaluation grounded in integrity and care. This notion builds on AEA’s guiding principles of Respect for People and Common Good and Equity, as well as the AEA Competencies related to the Context and Interpersonal domains.
As part of our evaluation practice, we seek to engage and include all stakeholders, employ deep listening, and speak with rather than at. Participants will acquire practical knowledge related to the application of W.O.K.E. and the associated evaluative strategies, and will analyze the context and dynamics of their own evaluation projects using the W.O.K.E. framework as both a method of analysis and a guide for ethical evaluation. Through this workshop, participants will learn to 1) define and apply W.O.K.E. practices in evaluation; 2) create more inclusive, responsive, and contextually driven ways of enacting evaluation; 3) build more sustainable, respectful partnerships with stakeholders; and 4) apply reflexive practices to their evaluation work. They will also understand how this framework connects to the AEA guiding principles and competencies highlighted above. Note: In recognition of those who contributed to the body of woke practices and theories (e.g., Babluksi, 2020; Bunyasi & Watts Smith, 2019; Doyle, 2018; Harvor, 2019; Roy, 2018; Williams, 2020), we adopt and adapt the term to engage participants in the mindful, inclusive, transformative practices that underpin our conceptualization and embodiment of W.O.K.E. as applied to evaluation.
Presenters: R. Lillianne Macias, Krista Grajo, Rasheeda O'Connor, Jordyn Beshel, MA, Julia LeFrancois, Nancy Nava
The COVID-19 pandemic and racial reckoning of the past several years have changed the landscape of evaluation practice with marginalized communities. Evaluation methods and capacity-building moved to an entirely virtual environment, creating unique problems and solutions around technological and social barriers to engagement. At the community level, catastrophic or symbolic losses, also referred to as collective trauma (Macias, 2021), bring attention to pre-existing and ongoing disparities in economic, mental, and physical health. All these stressors contribute to evaluation context, revealing unique barriers. Even language, a surface-level aspect of cultural adaptation (communication), may be obstructed in the online environment, even with the aid of online interpreters. Research also shows that both youth and adults, particularly from BIPOC communities, may experience screen fatigue and need greater accommodations to facilitate understanding and engagement online. The challenges of recent years have also revealed community strengths, such as collective calls to action and program adaptation to online environments. Leveraging storytelling methods through technology to build capacity, assess community needs, evaluate online programs, and disseminate findings can support the cultural relevance of evaluation practice. In this workshop, case examples will be shared to illustrate adaptations to the online environment, including developing video consents, online journaling tools, a summary of best practices for group video chats, and novel dissemination tools such as online, video-game-style galleries and infographics.
Presenters: Brittany Dernberger, Ph.D., Caitlin Shannon, J.D., M.P.H., David Leege, Ph.D.
The international development sector has long been advancing systems thinking and complex programs designed to influence and strengthen systems for better development outcomes. Organizations like CARE continue to evolve their programming and business models, moving from a focus on direct implementation and impact to influencing systems change through advocacy, social norms change, systems strengthening, social accountability, social movements, and market-based approaches. The NGO role shifts to one of systems orchestration or facilitation, with less direct impact on individuals and communities. As a result, CARE’s impact occurs more at the systems level, through sustainable scaling of programs and solutions by other actors.
This paradigm shift, in how programs function and organizations operate, requires MEAL (monitoring, evaluation, accountability, and learning) practitioners to adapt the way we design and implement MEAL systems, the metrics we apply, and the data sources we use. This workshop is designed to share CARE’s approach and experience and to provide guidance on how to design and build capacity for MEAL systems in this evolving program environment. Specifically, the workshop sessions will focus on (1) understanding, measuring, and learning about systems change and impact; (2) understanding, measuring, and learning about the indirect or catalytic impact of programs; and (3) fostering organizational change management to strengthen the capacity and systems needed to support this mindset shift in MEAL systems.
Presenters: Jasmine Williams-Washington, Ph.D., Amber Trout, Ph.D.
Organizations are being called, more than ever, to respond to elevated tensions and increased awareness of structural racism. Resistance often emerges in these organizational change efforts: staff feel uncertain, uncomfortable, and unsafe. Equity-centered evaluation is not exempt from these feelings. Evaluators can use scenario planning to identify the conversations needed to “unfreeze” resistant staff or clients so they can see the changes that are possible, building capacity to relentlessly pursue equity. Evaluators have an opportunity to develop an equity lens with their team and bring an equity perspective to their clients.
Presenter: Donna M. Mertens
Evaluators who choose to work toward transformations that increase social, economic, and environmental justice will learn to apply a transformative framework to their evaluations that incorporates the use of mixed methods. This workshop is based on the premise that needed societal transformations can be supported through the use of transformative mixed methods approaches in evaluation. Examples will illustrate the use of these methods in multiple sectors when addressing systemic discrimination based on race, gender, disability, and sexual identity. Participants will apply the transformative framework to a case study tailored to their interests.
Presenters: Ralph Renger, PhD, MEP, Jessica Renger, MA
In recent years, there has been increased attention to the necessity of integrating systems thinking into evaluation practice, as well as increased demand for evaluators who understand systems thinking and can respond to the complex and dynamic environments in which many programs are situated. However, evaluators who can translate systems thinking principles into real-world practice are in short supply. One reason is that much of the systems thinking research and theory is abstract, presenting a barrier to those wishing to apply its principles. This workshop will introduce participants to System Evaluation Theory as a framework through which key systems thinking principles can be simplified and applied to meet the demands of complex evaluations (Renger, 2015; Renger, 2016; Renger et al., 2017; Renger et al., 2019; Renger et al., in press; Renger et al., 2020). After this workshop, participants will be able to recognize and respond to systems issues in evaluation practice, understand how to adapt their existing program evaluation knowledge to begin evaluating systems, and have access to a practical systems thinking toolkit they can draw upon to feel more confident and capable in their future evaluation endeavors.
Offered: Monday, June 6, 1:30 p.m. – 4:30 p.m.; Wednesday, June 8, 9:00 a.m. – 12:00 p.m.
Presenters: Yueyue Fan, Meng Fan, Dandan Chen, Huan Liu
To respond to the need to use data to develop strategic plans, measure performance, and derive insights, interactive dashboards have become a vital tool. Like traditional data reporting tools, dashboards allow people to review data summaries, identify trends, and guide decision-making processes. Additionally, dashboards afford unique capacities that traditional static reporting lacks, such as flexibility in disaggregation and timeliness in delivery. Dashboards have gradually become a critical tool for communicating with different stakeholders and maintaining transparency in data reporting. This workshop provides an overview of the dashboard design process and demonstrates how to use Excel to create an effective dashboard for data reporting.
Presenter: Amy Griffin
Evaluators wear many hats, and a critical one is that of facilitator. Facilitation is needed at every phase of the evaluation process, including planning meetings, participatory logic model development, development of data collection methods, qualitative data collection (e.g., focus groups), and results dissemination.
Quality facilitation builds trust between the evaluator and community partners/clients. Convening effective and productive meetings sends a message of respect and can create a positive and energetic working atmosphere. These skills are essential but rarely taught in school.
During this workshop, the presenter will discuss how to prepare for various types of facilitation experiences and will provide considerations for selecting facilitation modes (in-person versus virtual) and methods (using technology, flip charts, voting, group decision making) depending on the facilitation goals and group dynamics. Participants will also engage in a self-assessment process of their facilitation skills. This is an interactive session. Participants will be provided the opportunity to plan and present.
Offered: Monday, June 6, 9:00 a.m. – 12:00 p.m. NOTE: The Monday time slot is sold out.; Tuesday, June 7, 9:00 a.m. – 12:00 p.m.
Presenters: Carter Garber, Ph.D., David Pritchard
This workshop introduces the topics and challenges evaluators face as they are asked to evaluate social enterprises and public and non-profit programs that seek triple-bottom-line outputs (economic, social, and environmental). Participants share information about their level of experience with the topics and what they seek to learn. The facilitators draw on their ample experience, illustrating points with North American and international examples. Participants learn about a variety of evaluation methods and tools that can be used.
Participants examine the multiple roles of social impact evaluation users: government entities (such as the CDC), foundations, grantees, social enterprises, investors, investment managers, and others. They then use these outputs to identify the evaluation methods and tools they think are best suited to the evaluation challenges of each type of user.
In groups, participants review the abstracts of evaluation reports as case studies of the methods and tools used in different situations. Participants critique whether appropriate evaluation tools were used and what other methods would add to the richness of the reports. The facilitators identify "teaching moments" to add their own observations based on their experience.
The workshop gives participants opportunities to share the lessons they have learned as well as what else they would like to learn in the impact evaluation field. The facilitators provide participants a glossary, resources, a bibliography, and further training opportunities. They have found that most of these tools and methods are new to the majority of Summer Institute participants.
Materials will be shared in an online folder with participants who register in advance and as follow-up for those who attend the workshop. These include a glossary, a reading guide, a bibliography, and additional resources and references to methods and tools. Sharing materials online reduces photocopied handouts to only those needed for the three classroom exercises.
Presenters: Susan M. Wolfe, Ann W. Price
This workshop will provide evaluators who engage with community-based organizations and community members with insights, skills, and strategies for effective community engagement. Presenters will explore topics such as the personal qualities, values, knowledge, and skills needed to work effectively with communities. Presenters will use participatory techniques, such as small group discussions, exercises, and sharing, to make the workshop interactive and fun. Participants will learn how to develop the skills needed to work successfully with community members and community-based organizations while navigating resource limitations, community politics, and other challenges. They will learn to build successful collaborations with communities for participatory evaluations, and the presenters will provide practical tips and resources.
Offered: Tuesday, June 7, 9:00 a.m. – 12:00 p.m.; Tuesday, June 7, 1:30 p.m. – 4:30 p.m.
Presenters: Dr. Janelle Coleman, Dr. Amanda Assalone
According to Goal 3 of the University of Tennessee Knoxville’s (UTK) Strategic Vision, UTK aspires to “develop and sustain a nurturing university culture where diversity and community are enduring sources of strength, and to nurture change that supports inclusive behaviors and a culture of respectful dialogue to create greater understanding of difference, starting with our administration, faculty, and staff” (UTK Strategic Vision, 2021). A key factor in meeting this goal is cultivating inclusive leadership practices that foster a sense of belonging among members of the campus community. Inclusive leaders not only engage in critical self-reflection, but also use what they learn from their reflections, along with data from stakeholders, to make decisions that enhance and support the well-being, sense of belonging, and safety of staff, students, and faculty. In this interactive workshop, participants will reflect on their own organizations, learn how to interpret and draw conclusions from data, and learn how to use that information to make large- and small-scale changes that can positively impact organizational culture and climate. Additionally, participants will gain insight into how to effectively plan for and drive positive change despite limited resources.
Presenters: Julia Jordan, Cagney Stigger Morns
Evaluation resources are always scarce, and deciding where to invest in evaluation implementation is not an easy task. Systematic screening and assessment (SSA) can serve as a strategy to help make those decisions. Rather than risk an entire evaluation budget on the assumption that a program is being implemented with fidelity and is ready for evaluation, a system that includes expert panel review and evaluability assessments (EA) can sift through programs in search of evidence of evaluation readiness. This workshop will present strategies for discerning when to invest in rigorous evaluation for evidence building. It will cover the practical implementation of SSAs and EAs to mitigate investment risk when building evidence for programs operating in the field, and how these methods have been adapted to identify evaluation readiness under limited time and/or resources. Specifically, the workshop will discuss adaptations to the SSA/EA process that allowed for rapid identification of programs during the COVID-19 pandemic. Participants will have the opportunity to apply many of the steps through case examples.
Presenters: Vivian Agbegha, Jessica Li
Designing a project effectively and inclusively from the start helps to improve programming in a variety of ways, from more intimate knowledge of the context, to faster mobilization when responding to unexpected shocks, to increased engagement with and feedback from diverse stakeholders. Social equity – justice or fairness in social outcomes – is an increasing priority for evaluators as well as program implementers. One way to ensure more useful and equitable technical design is to tailor interventions to the local context by identifying and addressing the root causes of problems that hinder the achievement of desired objectives, while ensuring that all stakeholder groups, including those that have traditionally experienced inequities, are consulted in the process. Root cause analysis (RCA), which precedes theory of change (TOC) development, is a methodical exercise to identify the underlying causes of major problems in order to develop appropriate interventions that will lead to lasting change. RCAs can take several forms, but they share the common purpose of inclusive, evidence-based problem identification to improve outcomes. This workshop will highlight participatory methods for gathering the data that inform RCA development, taking equity priorities into account by involving the full range of stakeholders invested in the problem and grounding the RCA in deep local context.
The workshop will focus on problem tree analysis, taking a deep dive into this RCA approach so that participants leave with a strong understanding of one tool. The material will also briefly cover other RCA methods, such as Bottleneck Analysis, the Fishbone Diagram, and the Five Whys. In this three-hour workshop, experienced facilitators will deliver specific guidance on how to develop a problem tree to inform enhanced programming grounded in evidence and local context, and participants will leave with a better understanding of when, how, and why RCA tools can be used to develop and enhance program interventions. The facilitators will lead participants through sample exercises and simulations outlining the steps needed to prepare for RCAs, honoring the eight principles of adult learning and structuring the time to encourage shared dialogue, mentorship, practicality, and a focus on relevance and translation of knowledge into personal goals. These exercises will include translating the research into theory of change development as well as developing equity-focused solutions to the problems identified.
Presenters: Stephanie Frost, Shannon Hitchcock, Shelly Spoeth
This workshop is designed to provide information, insights, and skills to help evaluators strategically think about, implement, and assess dissemination activities for a broad range of projects or programs. In this three-hour workshop, presenters will provide an overview of key dissemination-related frameworks, along with examples of the frameworks in use. They will then provide hands-on tools that can be used to plan, implement, and evaluate dissemination activities. Presenters will combine brief periods of lecture-focused learning with group-based, hands-on activities to ensure attendees can actively apply session content. During the workshop, participants will develop one or more of the following for their own program or project: a dissemination-focused logic model, an audience assessment tool, a partner engagement tracker, and/or a dissemination evaluation plan. The content of this workshop will be useful for applied researchers, grant makers, and program officers, including those who are new to evaluation and those with evaluation-related experience.
Attendees will learn to:
Presenter: Brittany R. Pope
Evaluation professionals would agree that their role is never simply limited to crunching numbers and completing reports for programs and funders. This workshop is for evaluation professionals – whether operating formally in research, quality improvement, or program evaluation roles – looking to expand their impact across the life cycle of programs and projects at their organizations, while also responding to the call for increased equity-focused initiatives that improve the lives of individuals and communities.
This workshop will use case studies and hands-on activities to introduce community-based behavioral health innovations that increase health equity for children and parents, mothers experiencing depression, and fathers during the pre- and postnatal periods. Each of these innovations grew from practice-informed quality improvement and research initiatives that elevated collaboration across key stakeholders and subject matter experts both internal and external to the mid-sized Ohio community-based organization. In other words, program evaluation reports inspired attention-grabbing, research-based innovations that have been recognized as evidence-based, cutting-edge programs and services that promote health and social justice. Presenters will share highlights of how evaluation professionals became better integrated across the life cycle of projects and programs: no longer relegated to only analyzing and reporting on data, but instead informing the design of project logic models, data collection methods, and evaluation plans at the start of and throughout projects.
This workshop will prompt attendees to reflect on their own professional identity as a driver of equity for families and communities, and introduce fast cycle iteration as an actionable strategy to elevate cultural responsiveness as an equity-driving tactic. Attendees will leave this transformative learning workshop invigorated and ready with draft procedures, proposals, and memos to implement upon return to the office. This workshop will be thought- and emotion-provoking, but most importantly it will serve as a springboard for attendees to respect and honor the people served as more than numbers on a spreadsheet, and to advocate for their own inclusion in their organizations as more than the data people.
This workshop will suit learners with varied levels of experience, as the facilitator is experienced in navigating diverse groups and making space for everyone to feel comfortable and confident in their ability to both learn and contribute. A variety of learning strategies and materials will be employed to honor different ways of knowing, effective adult learning and development principles, and inclusion.
How to initiate practice-informed quality improvement and research designs that promote equity in behavioral health community-based settings
How to engage as an evaluator across the life cycle of community-based family serving projects/programs
How to support programmatic inclusion of cultural responsiveness and fast cycle iteration elements into evaluation activities for community-based programs
Offered: Tuesday, June 7, 1:30 p.m. – 4:30 p.m.; Wednesday, June 8, 9:00 a.m. – 12:00 p.m.
Presenter: Martha A. Brown
Trauma-informed evaluation is a lens through which evaluators see their clients, programs, organizations, work, and lives. This lens is informed by SAMHSA’s Six Principles of Trauma-Informed Care: safety; peer support; collaboration and mutuality; trust and transparency; cultural, historical, and gender issues; and empowerment, voice, and choice. This hands-on workshop will provide evaluators with an understanding of individual and collective trauma, how trauma manifests in and impacts individuals and communities, and how to apply the Principles of Trauma-Informed (TI) Care to their work, regardless of evaluation method or approach. Attendees will learn how to identify various trauma-related behaviors and responses; how to avoid re-traumatization; how to regulate self and others when triggered or when experiencing toxic stress; and how to heal the secondary trauma evaluators may experience when working with traumatized populations. Activities will include small group brainstorming sessions on how to apply the Principles of TI Care to specific methods, approaches, and contexts.
Realize the widespread impacts of trauma and recognize the signs and symptoms of trauma in self, clients, participants, and communities
Respond by fully integrating knowledge of trauma and resiliency into program and evaluation design
Employ the Principles of Trauma-Informed Care to avoid retraumatizing self and others
Offered: Monday, June 6, 1:30 p.m. – 4:30 p.m.; Tuesday, June 7, 1:30 p.m. – 4:30 p.m.
Presenter: Elizabeth DiLuzio
Participatory Design is the process of co-designing effective approaches and solutions with stakeholders in a variety of multi-disciplinary contexts. It is seen as an important approach to the design of sustainable and responsive programs, products, and business models. This workshop is a hands-on opportunity to explore structures that enable Participatory Design to be further incorporated into the field of Evaluation.
In this workshop, the presenter will introduce participatory methods and tools for face-to-face and remote meetings at every stage of the evaluation process. Throughout the workshop, the group will reflect on how to use and adapt these methods according to different objectives, contexts and phases. This workshop includes a step-by-step guide to hosting a wide variety of participatory structures.
Be inspired with new ideas about how to incorporate stakeholder voice in every stage of the evaluation process
Acquire a basic understanding of Participatory Evaluation, its philosophy and principles
Be familiar with participatory approaches, methods, tools and skills