SUMMER EVALUATION INSTITUTE 2016

Workshop and Course Descriptions

Workshop 1: Introduction to Evaluation

Offered: Sunday, June 26, 2016 from 9:00am-4:00pm

Speaker: Jan Jernigan

Level: Advanced Beginner

This workshop will provide an overview of program evaluation for Institute participants with some, but not extensive, prior background in program evaluation. The workshop will be organized around the Centers for Disease Control and Prevention’s (CDC) six-step Framework for Program Evaluation in Public Health as well as the four sets of evaluation standards from the Joint Committee on Standards for Educational Evaluation. The six steps constitute a comprehensive approach to evaluation; while the Framework originated in public health, it can guide evaluation in any sector. The workshop will place particular emphasis on the early steps, including identifying and engaging stakeholders, creating logic models, and selecting and focusing evaluation questions. Through case studies, participants will have the opportunity to apply the content and work through some of the trade-offs and challenges inherent in program evaluation in public health and human services.

You will learn:

  • A six-step framework for program evaluation
  • How to identify stakeholders, build a logic model, and select evaluation questions
  • The basics of evaluation planning

Audience: Attendees with some background in evaluation, but who desire an overview and an opportunity to examine challenges and approaches. Cases will be from public health but general enough to yield information applicable to any other setting or sector.

Jan Jernigan, PhD, is a Behavioral Scientist and Senior Advisor for Evaluation in the Division of Nutrition, Physical Activity and Obesity (DNPAO), Centers for Disease Control and Prevention (CDC). She is involved in applied evaluation research in the Division and serves as a technical expert, providing training and technical assistance to funded state and local health departments and their partners in conducting evaluations of their initiatives.  Currently, she co-leads the national evaluation of a chronic disease funding opportunity announcement (FOA) that provides $75 million annually to fund all 50 states and the District of Columbia to address chronic diseases and their risk factors and serves as a senior evaluation advisor for four additional FOAs.  Dr. Jernigan leads research efforts to examine communities with declines in childhood obesity, improve physical activity and nutrition for the military as part of the DOD Healthy Base initiative, and develop new evaluation guidance for USDA funding for SNAP-Ed.

Workshop 2: A Primer on Evaluation Theories and Approaches

Offered: Sunday, June 26, 2016 from 9:00am-4:00pm

Speaker: Daniela Schroeter 

Level: Beginner

This workshop presents an introduction to historical and contemporary theories and approaches to evaluation in interdisciplinary contexts. The primary focus is on key evaluation terminology and classifications of theories and approaches recommended by evaluation thought leaders. Workshop participants will gain insight into how their own backgrounds, training, and contexts may influence their choice of or preference for particular approaches. Incorporating small group activities, case studies, and discussions, this workshop will allow for critical reflection and active engagement with key content so that participants will leave the workshop with a solid understanding of existing theories and approaches and their strengths, weaknesses, and opportunities for application in practice.

You will learn:

  • To recognize different evaluation theories and approaches
  • To identify strengths, weaknesses, and opportunities associated with various evaluation theories and approaches in differing contexts
  • To apply different theories and approaches in evaluation practice

Audience: Evaluation practitioners of all levels in all sectors.

Daniela Schroeter is an assistant professor in Western Michigan University’s (WMU) School of Public Affairs and Administration, an associate faculty member of the Interdisciplinary Ph.D. in Evaluation program, and a principal investigator in the Evaluation Center (EC). Prior to joining WMU’s faculty, she was the Director of Research in WMU’s EC where she served as the project director on numerous, interdisciplinary evaluation grants and contracts. Schroeter is an experienced workshop facilitator who specializes in evaluation theory, methodology, practice, and capacity building. She currently teaches courses in program evaluation, grant writing, analytical methods, and nonprofit governance. She was the recipient of the American Evaluation Association’s 2013 Marcia Guttentag Award.

Workshop 3: Development and Use of Indicators for Program Evaluation

Offered: Monday, June 27, 2016 from 9:10am-12:30pm

Speaker: Goldie MacDonald

Level: Beginner to Intermediate

The selection of indicators for use in program evaluation can be complex and time-consuming. Moreover, stakeholders who are expected to participate in this work may come to the discussion with varying levels of knowledge relevant to the program and its evaluation. In this workshop, participants will learn to identify and select good indicators and consider how to fully engage stakeholders in the dialogue. Topics include criteria for selection of indicators, as well as key considerations relevant to planning indicators-based evaluations in domestic and international settings.

You will learn:

  • To explain the necessary alignment of indicators to other critical elements of evaluation design (e.g., purpose of the evaluation, evaluation questions) and consequences of misalignment.
  • To explore and construct both process and outcome indicators.
  • To explain the importance of basic literature searches to indicator development and use.
  • To review examples of operational definitions that should accompany indicators to be used in an evaluation.
  • To identify criteria for selection of high-performing indicators.
  • To recognize common mistakes or practice traps in the development and use of indicators and strategies to avoid them.

Audience: Evaluation practitioners who would like to strengthen their skills in developing and selecting indicators for program evaluation. Participants do not need prior experience with indicator development for this session.

Goldie MacDonald, PhD is a Health Scientist in the Center for Global Health (CGH) at the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia.  Since joining the CDC in 1999, she co-led design and implementation of a multisite evaluation of the Field Epidemiology Training Program (FETP) in 10 countries; design and implementation of The National Inventory of Core Capabilities for Pandemic Influenza Preparedness and Response to evaluate influenza preparedness in 36 countries; and design and implementation of an evaluation of the Steps to a HealthierUS Cooperative Agreement Program in 42 communities in the U.S.  She is the lead author of Introduction to Program Evaluation for Comprehensive Tobacco Control Programs; the authors received the Alva and Gunnar Myrdal Award for Government from the AEA in November 2002.  Currently, she co-leads design and implementation of a multisite evaluation of Integrated Disease Surveillance and Response (IDSR) in Africa.

Workshop 4: Twelve Steps of Quantitative Data Cleaning: Strategies for Dealing with Dirty Evaluation Data

Offered: Monday, June 27, 2016 from 1:40pm-5:00pm; Tuesday, June 28, 2016 from 9:10am-12:30pm

Speaker: Jennifer Morrow

Level: Intermediate

Evaluation data, like a lot of research data, can be messy. Rarely are evaluators handed data that are ready to be analyzed. Missing data, coding mistakes, and outliers are just some of the problems that evaluators should address before conducting analyses for their evaluation report. Even though data cleaning is an essential precursor to data analysis, the topic has received little attention in the literature, and the resources that are available tend to be complex and not always user friendly. In this workshop, you will go step by step through the data cleaning process and learn what to do at each step.
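
As a flavor of the kinds of checks the workshop addresses, the sketch below shows a few common data-cleaning steps in Python with pandas. It is illustrative only: the column names, the 999 missing-data code, and the outlier cutoff are hypothetical, and it is not the presenter's 12-step process.

    # Illustrative sketch only (not the presenter's 12-step process): a few common
    # data-cleaning checks in Python/pandas. Column names, the 999 missing-data
    # code, and the |z| > 3 outlier rule are hypothetical choices for this example.
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "participant_id": [1, 2, 3, 4, 5],
        "age": [34, 29, 999, 41, np.nan],   # 999 used here as a missing-data code
        "satisfaction": [4, 5, 3, 27, 2],   # 1-5 scale; 27 is a coding mistake
    })

    # Recode known missing-data codes to true missing values
    df["age"] = df["age"].replace(999, np.nan)

    # Flag out-of-range values on the 1-5 satisfaction scale
    out_of_range = ~df["satisfaction"].between(1, 5)
    print("Out-of-range satisfaction rows:", df.loc[out_of_range, "participant_id"].tolist())

    # Document missingness before deciding how to handle it
    print("Missing values per column:")
    print(df.isna().sum())

    # Screen for univariate outliers with z-scores
    z = (df["age"] - df["age"].mean()) / df["age"].std()
    print("Potential age outliers:", df.loc[z.abs() > 3, "participant_id"].tolist())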

You will learn:

  • The recommended 12 steps for cleaning dirty evaluation data
  • Suggestions for ways to deal with messy data at each step
  • Methods for reviewing analysis outputs and making decisions regarding data cleaning options

Audience: Novice and experienced evaluators

Jennifer Morrow is an Associate Professor in Evaluation, Statistics and Measurement at the University of Tennessee with more than 16 years of experience teaching statistics at the undergraduate and graduate levels. She is currently working on a book about the 12 steps of data cleaning.

Workshop 5: It's not the Plan, It's the Planning: Strategies for Evaluation Plans and Planning

Offered: Monday, June 27, 2016 from 9:10am-12:30pm; Monday, June 27, 2016 from 1:40pm-5:00pm

Speaker: Sheila Robinson

Level: Beginner, Advanced Beginner

"If you don’t know where you’re going, you’ll end up somewhere else" (Yogi Berra). Few evaluation texts explicitly address the act of evaluation planning as independent from evaluation design or evaluation reporting. This interactive session will introduce you to an array of evaluation activities that comprise evaluation planning and preparing a comprehensive evaluation plan. You will leave this workshop with an understanding of how to identify stakeholders and primary intended users of evaluation, the extent to which they need to understand and be able to describe the evaluand (the program), strategies for conducting literature reviews, strategies for developing broad evaluation questions, considerations for evaluation designs, and using the Program Evaluation Standards and AEA’s Guiding Principles for Evaluators in evaluation planning.

You will be introduced to a broad range of evaluation planning resources including templates, books, articles, and websites.

You will learn:

  • Types of evaluation activities that comprise evaluation planning
  • Potential components of a comprehensive evaluation plan
  • Considerations for evaluation planning (e.g., client needs, collaboration, procedures, and agreements)

Audience: Evaluation practitioners with some background in evaluation basics.

Sheila Robinson is a Program Evaluator and Instructional Mentor for Greece Central School District, and Adjunct Professor at the University of Rochester’s Warner School of Education. Her background is in special education and professional development, and she is a certified Program Evaluator. Her work for the school district centers on professional development, equity and culturally responsive education, and evaluation. At Warner School, she teaches graduate courses in Program Evaluation and Designing and Evaluating Professional Development. She is Lead Curator of AEA365 Tip-A-Day By and For Evaluators, Coordinator of the Potent Presentations (p2i) Initiative, and is a past Program Chair of AEA's PK-12 Educational Evaluation TIG.

Workshop 6: Evaluating Community Coalitions and Partnerships: Methods, Approaches, and Challenges

Offered: Tuesday, June 28, 2016 from 1:40pm-5:00pm; Wednesday, June 29, 2016 from 9:10am-12:30pm

Speaker: Frances Butterfoss

Level: Advanced Beginner

Coalitions involve multiple sectors of the community and implement strategies that focus on policy, systems, and environmental change. The pooling of resources, mobilization of talents, and diverse approaches inherent in a community coalition make it a logical approach for promoting health and preventing disease. A coalition or partnership must evaluate its infrastructure, function, and processes; its strategies for achieving its goals; and changes in the health and social status of the community.

You will learn:

  • To develop a comprehensive evaluation strategy based on coalition theory
  • To select appropriate short-, intermediate-, and long-term indicators to measure outcomes
  • To choose appropriate methods and tools
  • To use evaluation results to provide accountability to stakeholders and improve the coalition

Audience: Attendees working in communities with a general knowledge of evaluation terminology and quantitative and qualitative data collection methods.

Frances Dunn Butterfoss is a health educator and President of Coalitions Work, a consulting group that helps communities develop, sustain and evaluate health promotion/disease prevention coalitions. She is an adjunct professor at Eastern Virginia Medical School and Old Dominion University and teaches in their MPH program. Butterfoss is the founder of the Consortium for Infant and Child Health (CINCH) and Project Immunize Virginia (PIV). She evaluated Virginia’s Healthy Start coalitions to prevent infant mortality, directed the National Coalition Training Institute, and was funded by the Robert Wood Johnson Foundation for community asthma and health insurance initiatives. She served as Deputy Editor of Health Promotion Practice, and her text, Coalitions and Partnerships in Community Health, is valued by practitioners and academics alike.

Workshop 7: Logic Models for Program Evaluation and Planning

Offered: Wednesday, June 29, 2016 from 9:10am-12:30pm

Speaker: Thomas Chapel

Level: Advanced Beginner

The logic model, as a map of what a program is and intends to do, is a useful tool in both evaluation and planning and, as importantly, for integrating evaluation plans and strategic plans.  In this session, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining, and validating models. Then, we will examine how to use these models both prospectively for planning and implementation as well as retrospectively for performance measurement and evaluation.  This workshop will illustrate the value of simple and more elaborate logic models using small-group case studies.

You will learn:

  • To construct simple logic models
  • To use program theory principles to improve a logic model
  • To employ a model to identify and address planning and implementation issues

Thomas Chapel is the Chief Evaluation Officer at the Centers for Disease Control and Prevention. He serves as a central resource on strategic planning and program evaluation for CDC programs and their partners. Before joining CDC, Chapel was Vice-President of the Atlanta office of Macro International (now ICF International), where he directed and managed projects in program evaluation, strategic planning, and evaluation design for public and nonprofit organizations. He is a frequent presenter at national meetings, a frequent contributor to edited volumes and monographs on evaluation, and has facilitated or served on numerous expert panels on public health and evaluation topics.

Workshop 8: Basics of Program Design: A Theory-Driven Approach

Offered: Tuesday, June 28, 2016 from 9:10am-12:30pm

Speakers: Stewart Donaldson; John Gargani

Level: Beginner

Evaluators often take an active role in program design, and understanding the basics of program design from a theory-driven evaluation perspective can be essential. In this workshop, you will learn the five elements of a basic program design and how they relate to program theory and social science research. A strong program design is an important element in evaluation design. Begin to develop your skill in putting together the pieces of a program with the potential to address social, health, educational, organizational, and other issues. Mini-lectures interspersed with small group activities will help you apply and understand the concepts presented. Examples from evaluation practice will illustrate main points and key take-home messages, and you will receive a handout of further resources.

You will learn:

  • To develop a basic program design from a theory-driven evaluation perspective
  • To use social science theory to design and improve social, health, educational, and organizational programs
  • To identify roles evaluators can play in designing programs
  • To understand the professional challenges and ethical issues that may confront evaluators when they participate in the program design process

Stewart Donaldson is the Immediate Past-President of AEA, Professor & Dean of the School of Social Science, Policy & Evaluation and the School of Community and Global Health, and Director of the Claremont Evaluation Center at Claremont Graduate University.  He has published widely on applying evaluation and program theory, has developed one of the largest university-based evaluation training programs, and has conducted developmental, formative, and summative evaluations for more than 100 organizations during the past two decades.

John Gargani is the 2016 President of AEA. He is also President of Gargani + Company, Inc., a consulting company in Berkeley, California that has been helping organizations achieve their social missions for over 20 years. He and his staff at Gargani + Company, Inc. work with organizations of every type to design innovative solutions to social and environmental problems, and to evaluate their effectiveness and efficiency in rigorous ways.  He holds three graduate degrees—a Ph.D. in Education from the University of California, Berkeley, where he studied measurement and evaluation; an M.S. in Statistics from New York University’s Stern School of Business; and an M.B.A. from the Wharton School of the University of Pennsylvania.

Workshop 9: Introduction to Communities of Practice (CoPs)

Offered: Monday, June 27, 2016 from 9:10am-12:30pm; Tuesday, June 28, 2016 from 9:10am-12:30pm

Speakers: Leah Neubauer; Thomas Archibald

This interactive workshop will introduce Communities of Practice (CoPs) and their application for evaluators and evaluation. CoPs are designed to engage learners in a process of knowledge construction around common interests, ideas, passions, and goals—the things that matter to the people in the group. Through identifying the three core CoP elements (domain, community, and practice), members work to generate a shared repertoire of knowledge and resources. CoPs can be found in many arenas: corporations, schools, non-profit settings, within evaluation designs, and in local AEA affiliate practice. This workshop will explore CoP development and implementation for a group of evaluators focused on understanding experience, increasing knowledge, and ultimately, improving evaluation practice. Session facilitators will also highlight examples from the fields of evaluation, public health, and adult education and involve participants in a series of hands-on, inquiry-oriented techniques.

You will learn:

  • Key theories and models guiding Communities of Practice
  • The ten essential fundamentals of developing and sustaining a Community of Practice
  • CoP methodologies, including storytelling, arts-based approaches, collaborative inquiry, evaluative thinking, and critical self-reflection

Audience: Evaluators of all levels.

Dr. Leah Christina Neubauer has been working in the field of public health as an educator, evaluator, and researcher for the last fifteen years.  Her research focuses on health education and promotion, global health & health disparities. She leads and collaborates on projects that employ mixed-method approaches to develop, implement, evaluate & disseminate translational and culturally responsive research and evaluation. Leah has collaborated with many global (Kenya-based), national, state and local partners on a variety of endeavors.  She has delivered numerous presentations and co-authored publications on global public health and community-based evaluation, training and research. Her dissertation, The Critically Reflective Evaluator, identified essential qualities and characteristics of evaluator-formed CoPs.  She is currently the Past-President of the AEA Affiliate – the Chicagoland Evaluation Association and co-chair of the AEA Local Affiliate Collaborative (LAC). She is an Assistant Professor of Preventive Medicine at Northwestern University. She received her EdD in Adult and Continuing Education in 2013 from National Louis University.

Dr. Thomas Archibald is an Assistant Professor and Extension Specialist in the Department of Agricultural, Leadership, and Community Education at Virginia Tech. His research and practice focus on program evaluation, evaluation capacity building (especially regarding the emergent notion of “evaluative thinking”), and research-practice integration, focusing specifically on contexts of Cooperative Extension and community education. He has facilitated numerous capacity building workshops around the United States and in sub-Saharan Africa. Archibald is a recipient of the 2013 Michael Scriven Dissertation Award for Outstanding Contribution to Evaluation Theory, Method, or Practice for his dissertation on the politics of evidence in the “evidence-based” education movement. He is a Board Member of the Eastern Evaluation Research Society and a Program Co-Chair of the AEA Organizational Learning and Evaluation Capacity Building Topical Interest Group. He received his PhD in Adult and Extension Education in 2013 from Cornell University, where he was a graduate research assistant in the Cornell Office for Research on Evaluation under the direction of Bill Trochim.

Workshop 10: Advanced Cost-Effectiveness Analysis for Health and Human Service Programs

Offered: Tuesday, June 28, 2016 from 1:40pm-5:00pm; Wednesday, June 29, 2016 from 9:10am-12:30pm

Speaker: Edward Broughton

Level:  Advanced

The relentless drive for health and human service programs to be more efficient and affordable demands robust economic analyses so policymakers can make informed decisions. This interactive workshop builds on basic knowledge and skills in cost-effectiveness analysis (CEA) to help you understand the workings of more realistic economic models that take into account uncertain data and changing circumstances. Learn what sensitivity analysis is, what a Markov model looks like and how to use probability modeling. By the end of the session, you will be able to conduct your own basic cost-effectiveness analysis and interpret and communicate its results. You will also understand more complex economic analyses of health and human service programs and possess the basic framework upon which you can develop further skills in this area. 
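
As context for the arithmetic the workshop builds on, the sketch below computes a basic incremental cost-effectiveness ratio (ICER), runs a simple one-way sensitivity analysis, and steps a two-state Markov cohort model forward in time. It is written in Python for illustration; all costs, effects, and transition probabilities are hypothetical, and the workshop's own models go well beyond this.

    # Illustrative sketch only: basic cost-effectiveness arithmetic with hypothetical
    # numbers. The workshop's models (e.g., probabilistic sensitivity analysis) are
    # more elaborate than this.
    import numpy as np

    def icer(cost_new, effect_new, cost_old, effect_old):
        """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
        return (cost_new - cost_old) / (effect_new - effect_old)

    # Base case: the new program costs more but averts more cases
    base = icer(cost_new=120_000, effect_new=60, cost_old=80_000, effect_old=40)
    print(f"Base-case ICER: ${base:,.0f} per case averted")

    # One-way sensitivity analysis: vary the new program's effectiveness
    for effect_new in (50, 55, 60, 65, 70):
        print(f"Effect = {effect_new} cases averted -> ICER ${icer(120_000, effect_new, 80_000, 40):,.0f}")

    # A minimal two-state Markov cohort model (Well, Sick) with an annual cycle
    P = np.array([[0.9, 0.1],     # from Well: 90% stay well, 10% become sick
                  [0.2, 0.8]])    # from Sick: 20% recover,   80% stay sick
    state = np.array([1.0, 0.0])  # entire cohort starts in the Well state
    for year in range(1, 6):
        state = state @ P
        print(f"Year {year}: {state[0]:.1%} well, {state[1]:.1%} sick")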

You will learn:

  • How to do a CEA that is relevant to the work you are involved in
  • How to develop a model that accounts for the uncertainty of the inputs to it
  • What a Markov model is used for and how it works
  • How to effectively interpret and communicate the results of a CEA

Audience: Evaluation practitioners who have some experience in the field and who would like to become familiar with cost-effectiveness and other forms of economic evaluation. Participants do not need to have any experience in cost-effectiveness analysis for this session.

Edward Broughton is Director of Research and Evaluation on the USAID ASSIST Project with University Research Co. He previously served as adjunct faculty at the Mailman School of Public Health at Columbia University, teaching economic analysis, health economics, research methods for health policy and management, and decision analysis.

Workshop 11: Conflict Resolution Skills for Evaluators

Offered: Monday, June 27, 2016 from 9:10am-12:30pm; Tuesday, June 28, 2016 from 9:10am-12:30pm

Speaker: Jeanne Zimmer

Unacknowledged and unresolved conflict can challenge even the most skilled evaluators. Conflict between evaluators and clients and among stakeholders creates barriers to successful completion of the evaluation project. This workshop will delve into ways to improve listening, problem solving, communication, and facilitation skills and introduce a streamlined process of conflict resolution that may be used with clients and stakeholders.

Through a hands-on, experiential approach using real-life examples from program evaluation, you will become skilled at the practical applications of conflict resolution as they apply to situations in program evaluation. You will have the opportunity to assess your own approach to handling conflict and build on that assessment to improve your conflict resolution skills.

You will learn:

  • The nature of conflict in program evaluation and possible positive outcomes
  • How to incorporate the five styles of conflict-resolution as part of reflective practice
  • Approaches to resolving conflict among stakeholders with diverse backgrounds and experiences
  • Techniques for responding to anger and high emotion in conflict situations
  • To problem solve effectively, including win-win guidelines, clarifying, summarizing, and reframing

Jeanne Zimmer has served as Executive Director of the Dispute Resolution Center since 2001 and is completing a doctorate in evaluation studies with a minor in conflict management at the University of Minnesota. For over a decade, she has been a well-received professional trainer in conflict resolution and communication skills.

Workshop 12: Using Theory to Improve Evaluation Practice

Offered: Monday, June 27, 2016 from 1:40pm-5:00pm

Speaker: Stewart Donaldson

Level: Beginner

This workshop will provide evaluators with an opportunity to improve their understanding of how to use theory to improve evaluation practice. The workshop will examine social science theory and stakeholder theories, including theories of change and their application to making real improvements in how evaluations are framed and conducted. Lecture, exercises, and discussions will help participants learn how to apply evaluation theories, social science theories, and stakeholder theories of change to improve the accuracy and usefulness of evaluations. A wide range of examples from evaluation practice will be provided to illustrate main points and key take-home messages.

You will learn:

  • To define and describe evaluation theory, social science theory, and program theory
  • How evaluation theory can be used to improve evaluation practice
  • How implicit and explicit social science theories can be used to guide evaluation decisions
  • The components and processes of several commonly used social science theories that have been used to develop and evaluate interventions
  • How developing stakeholder theories of change can be used to improve evaluation practice
  • To describe and address the common challenges and the professional and ethical issues that arise when evaluators design or improve programs

Audience:  Evaluators working in any context.

Stewart Donaldson is Immediate Past-President of AEA, Professor & Dean of the School of Social Science, Policy & Evaluation and the School of Community and Global Health, and Director of the Claremont Evaluation Center at Claremont Graduate University.  He has published widely on applying evaluation and program theory, has developed one of the largest university-based evaluation training programs, and has conducted developmental, formative, and summative evaluations for more than 100 organizations during the past two decades.

Workshop 13: Beyond the Basics of Program Design: A Theory-Driven Approach

Offered: Tuesday, June 28, 2016 from 1:40pm-5:00pm

Speaker: John Gargani

Level: Intermediate

A strong program design is critical to the success of social, health, educational, organizational and other programs. Consequently, evaluators with strong design skills can improve a program’s chances of success and the quality of their evaluations by taking an active part in the design process.

Building on the popular beginner workshop “Basics of Program Design,” this hands-on workshop will help you take your program design skills to the next level. You will learn how to work more effectively with stakeholders in collaborative settings, and how that strategy can yield stronger, more useful evaluations. Through mini-lectures interspersed with small-group activities, learn to apply and understand the concepts presented.

You will learn:

  • How to develop, refine, and integrate all the elements of a program design from a theory-driven evaluation perspective
  • How to ensure that stakeholder values are embedded in the program
  • How to connect program activities with program purposes in a detailed, comprehensive way
  • How to use a program design to craft comprehensive monitoring and evaluation systems
  • How to identify roles evaluators can play in a collaborative design process
  • How to describe and address the common challenges and the professional and ethical issues involved when evaluators design or improve programs

John Gargani is the 2016 President of AEA. He is also President of Gargani + Company, Inc., a consulting company in Berkeley, California that has been helping organizations achieve their social missions for over 20 years. He and his staff at Gargani + Company, Inc. work with organizations of every type to design innovative solutions to social and environmental problems, and to evaluate their effectiveness and efficiency in rigorous ways.  He holds three graduate degrees—a Ph.D. in Education from the University of California, Berkeley, where he studied measurement and evaluation; an M.S. in Statistics from New York University’s Stern School of Business; and an M.B.A. from the Wharton School of the University of Pennsylvania.

Workshop 14: Strategies for Interactive Evaluation Practice: An Evaluator's Dozen

Offered: Monday, June 27, 2016 from 9:10am-12:30pm; Monday, June 27, 2016 from 1:40pm-5:00pm

Speaker: Laurie Stevahn

Level: Intermediate

In its many forms, evaluation practice requires evaluators to be skilled facilitators of interpersonal interactions. Whether you are completely in charge, working collaboratively with program staff, or coaching individuals conducting their own study, you need to interact with people throughout the course of an evaluation. This workshop will provide practical frameworks and strategies for analyzing and extending your own practice. Through presentation, demonstration, discussion, reflection, and case study, you will consider and experience strategies to enhance involvement and foster positive interaction in your evaluation practice.

You will learn:

  • The three frameworks that underpin interactive evaluation practice (IEP)
  • Rationales for engaging clients/stakeholders in various evaluation tasks
  • Interactive strategies to facilitate meaningful involvement, including voicing variables, cooperative interviews, making metaphors, data dialogue, jigsaw, graffiti/carousel, cooperative rank order, and others
  • Strategic applications useful in your own evaluation context

Laurie Stevahn is a professor in the Educational Leadership doctoral program in the College of Education at Seattle University, where she teaches research/evaluation, leadership, and social justice courses. She has over 30 years of experience as an educator, evaluator, and researcher specializing in creating conditions for cooperative interaction, constructive conflict resolution, participatory evaluation, and evaluation capacity building. Stevahn and Jean A. King, a professor and director of Graduate Studies in the Department of Organizational Leadership, Policy, and Development at the University of Minnesota, are co-authors of Interactive Evaluation Practice: Mastering the Interpersonal Dynamics of Program Evaluation (Sage, 2013) and Needs Assessment Phase III: Taking Action for Change (Sage, 2010).

Workshop 15: Data visualization using R: Are you ready to ggplot?

Offered: Tuesday, June 28, 2016 from 1:40pm-5:00pm; Wednesday, June 29, 2016 from 9:10am-12:30pm

Speaker: Tony Fujs

Level: Beginner

Whatever an evaluation's context and design, a constant for evaluators is the need to communicate results effectively to their audience so that those results can be used to create sustainable change. This is why data visualization is such an important tool for evaluators. R is known for its impressive data visualization capabilities. In this skill-building workshop, attendees will learn how to use R and the grammar of graphics to produce powerful, scalable visualizations with a minimum of resources.
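
The workshop itself teaches R's ggplot2. As a taste of the grammar-of-graphics idea in the same spirit, the sketch below layers data, aesthetic mappings, and a geometry using Python's plotnine library (a port of ggplot2); the data frame and variable names are made up for illustration.

    # Illustrative sketch only: grammar-of-graphics layering via plotnine, a Python
    # port of ggplot2 (the workshop uses R's ggplot2 itself). Data are hypothetical.
    import pandas as pd
    from plotnine import aes, geom_col, ggplot, labs, theme_minimal

    scores = pd.DataFrame({
        "site":    ["A", "B", "C", "D"],
        "outcome": [0.62, 0.74, 0.55, 0.81],
    })

    # A plot is assembled from components: data + aesthetics + geometry + labels + theme
    plot = (
        ggplot(scores, aes(x="site", y="outcome"))
        + geom_col()
        + labs(title="Outcome attainment by site", x="Site", y="Proportion meeting target")
        + theme_minimal()
    )
    plot.save("outcomes_by_site.png", width=5, height=3, dpi=150)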

You will learn:

  • The basics of the grammar of graphics, a powerful way to think about data visualization
  • How to create effective charts using R
  • How to produce complex or multiple charts in the blink of an eye

Over the past year, Tony Fujs has facilitated professional development workshops at institutions including the Urban Institute, Georgetown University, and the Eastern Evaluation Research Society (EERS). He has taught this workshop before for the Latin American Youth Center evaluation team, as well as at the EERS 2015 conference.

As the Director of Learning and Evaluation at the Latin American Youth Center (LAYC) in DC, Fujs supports information-based decision making and works to improve organizational performance. For the past five years, he has been using R to streamline data cleaning and data management, scale analytics, and improve visualizations of LAYC's data.

Workshop 16: Evaluating and Improving Organizational Collaboration

Offered: Monday, June 27, 2016 from 1:40pm-5:00pm; Tuesday, June 28, 2016 from 9:10am-12:30pm

Speaker: Rebecca Woodland

Level: Intermediate

"Collaboration" is a ubiquitous, yet misunderstood, under-empiricized, and un-operationalized construct. Program and organizational stakeholders looking to do and be collaborative struggle to identify, practice, and evaluate collaboration with efficacy. In this workshop we will explore how the principles of collaboration theory can be used to plan, evaluate, and improve strategic alliances and inter-professional collaboration. Participants will have the opportunity to increase their capacity to quantitatively and qualitatively examine the development of inter- and intra-organizational partnerships. Together, we will practice strategies for assessing levels of integration, stages of development, cycles of inquiry, and specific tools for data collection, analysis, and reporting. Using real world techniques from PreK-16 educational reform and the development of professional learning communities, grant and contract sponsored endeavors such as the Massachusetts' Board of Registered Nurses Patient Safety Initiative, CDC sponsored activities of the Association for State and Territorial Dental Directors (ASTDD), the federal Animal Plant and Health Inspection Service (APHIS), and EPsCoR/NSF grant-sponsored inter-jurisdictional research programs, we will recognize the five fundamental stages of evaluation of organizational collaboration embodied in the Collaboration Evaluation and Improvement Framework (Woodland & Hutton, 2012).

You will learn:

  • How to operationalize "collaboration" so as to understand and be able to evaluate the construct
  • How to use field-tested and validated strategies, tools, and protocols for the qualitative and quantitative assessment of collaboration
  • How social network analysis can be used to inventory, visualize, and mathematically describe the development of organizational collaboration over time (see the sketch after this list)
  • Why an increase in partnerships, or simply "more" collaboration, is not necessarily better
  • How evaluation processes and findings can be used by practitioners and stakeholders to strengthen their organizational collaboration
  • How to recognize, energize, and re-organize patterns of interaction between people and organizations so as to more effectively address complex societal issues
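
As a small illustration of the social network analysis mentioned above, the sketch below uses Python's networkx library to compute network density and degree centrality for a handful of hypothetical partner organizations; the organizations and ties are invented, and the workshop's own tools and protocols are not shown here.

    # Illustrative sketch only: describing inter-organizational collaboration with
    # social network analysis via networkx. Organizations and ties are hypothetical.
    import networkx as nx

    # Each edge represents a reported working relationship between two partners
    G = nx.Graph()
    G.add_edges_from([
        ("Health Dept", "School District"),
        ("Health Dept", "Hospital"),
        ("Health Dept", "Food Bank"),
        ("Hospital", "Food Bank"),
        ("School District", "Food Bank"),
    ])

    # Density: the share of possible partnerships that actually exist
    print(f"Network density: {nx.density(G):.2f}")

    # Degree centrality: which organizations sit at the center of the collaboration
    for org, score in sorted(nx.degree_centrality(G).items(), key=lambda item: -item[1]):
        print(f"{org}: {score:.2f}")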

Rebecca Woodland is an Associate Professor of Educational Leadership and Chair of the Department of Educational Policy and Administration at the University of Massachusetts Amherst and has facilitated workshops and courses for adult learners for more than 15 years. She has been fortunate to be a "Dynamic Dozen" honoree, recognized as one of AEA's most effective presenters over the past 10 years. In 2010, 2011, and 2012, Dr. Woodland delivered a version of this workshop for the AEA Summer Evaluation Institute. Dr. Woodland loves creating opportunities at AEA in which all participants experience meaningful learning, find the material useful and relevant, and have fun at the same time.

Workshop 17: Nonparametric Statistics: What to Do When Your Data Breaks the Rules

Offered: Monday, June 27, 2016 from 9:10am-12:30pm; Tuesday, June 28, 2016 from 9:10am- 12:30pm

Speaker: Jennifer Catrambone

Level: Beginner

This session walks participants through nonparametric statistics: techniques designed to be used with small, uneven, or skewed samples. Participants will leave with a handout that clearly identifies situations in which nonparametric statistics should be used, explains when and why they are appropriate, illustrates how to run the techniques in SPSS (including annotated screen shots), and shows how to interpret the output and write up the results. Participants are encouraged to email the presenter with the type of work they do so the presentation can be customized to be highly relevant. All levels are welcome.
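
The workshop demonstrates these techniques in SPSS. For a sense of what a nonparametric test looks like in practice, the sketch below runs a Mann-Whitney U test (the rank-based counterpart to an independent-samples t-test) in Python with scipy; the two small samples are hypothetical.

    # Illustrative sketch only: a Mann-Whitney U test on two small hypothetical
    # samples, the kind of situation where nonparametric tests are appropriate.
    # (The workshop itself demonstrates these techniques in SPSS.)
    from statistics import median

    from scipy import stats

    # Satisfaction ratings (1-10) from two small program sites
    site_a = [7, 8, 6, 9, 7, 8]
    site_b = [5, 6, 4, 6, 5]

    # Compare the two groups without assuming normally distributed data
    u_stat, p_value = stats.mannwhitneyu(site_a, site_b, alternative="two-sided")
    print(f"U = {u_stat:.1f}, p = {p_value:.3f}")

    # With rank-based tests, report medians rather than means
    print(f"Median, site A: {median(site_a)}; median, site B: {median(site_b)}")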

You will learn:

  • The basics of nonparametric statistics and how they differ from parametric statistics
  • When nonparametric statistics are appropriate and why, including how to explain this to others in print or in person
  • The types of nonparametric tests that exist
  • How to evaluate your own situation and apply the best-suited nonparametric test
  • How to run the techniques in SPSS (including annotated screen shots)
  • How to interpret the output
  • How to write up the results for publication or presentation

Workshop 18: Focus Group or Qualitative topics

Offered: Monday, June 27, 2016 from 1:40pm-5:00pm; Tuesday, June 28, 2016 from 1:40pm -5:00pm

Speaker: Michelle Revels

Michelle Revels is a technical director at ICF Macro specializing in focus group research and program evaluation. She has taught focus group research methods at both the AEA Annual Conference and Summer Evaluation Institute for multiple years. Revels attended Hampshire College in Amherst, Massachusetts and the Hubert H. Humphrey Institute of Public Affairs at the University of Minnesota.

Workshop 19: Identifying Evaluation Questions

Offered: Tuesday, June 28, 2016 from 1:40pm-5:00pm; Wednesday, June 29, 2016 from 9:10am-12:30pm

Speaker: Lori Wingate

Well-crafted evaluation questions are an efficient and powerful means for clarifying and communicating the focus of an evaluation, yet there is little formal guidance on this aspect of evaluation practice. Participants in this workshop will learn how to develop sound evaluation questions that can serve as a foundation for subsequent decisions about evaluation design, data collection, and data interpretation. The workshop includes hands-on exercises, tools, and resource materials to facilitate participants’ application of the workshop content in their practice. This workshop is designed for beginners, but will be beneficial for more experienced evaluators whose academic preparation did not include training on developing evaluation questions.

You will learn:

  • How to identify and utilize multiple information sources to inform the development of evaluation questions
  • Criteria for evaluation questions and how to apply them in developing or selecting questions to guide an evaluation
  • How to align data collection and data interpretation with evaluation questions to ensure a useful and meaningful evaluation

Audience: Individuals responsible for planning, conducting, or commissioning evaluations.

Workshop 20: Dissemination and Implementation Research: Implications for Evaluation

Offered: Tuesday, June 28, 2016 from 9:10am-12:30pm; Wednesday, June 29, 2016 from 9:10am-12:30pm

Speaker: James Emshoff

This workshop will focus on evaluating efforts associated with moving pilot or demonstration projects into widespread practice. Participants will discuss issues such as packaging, marketing and dissemination, and capacity building.

You will learn about:

  • Packaging the program for potential users
  • Marketing/disseminating the program to the potential users
  • Building capacity to select and use effective interventions
  • Understanding the decisions involved in adopting a new program
  • Maintaining quality control over multiple implementation sites
  • Facilitating and measuring implementation
  • Considering the balance between fidelity and adaptation
  • Scaling up efforts to achieve public health impact

Audience: Attendees with a basic background in evaluation.

Dr. Emshoff is an Associate Professor Emeritus of Psychology and former Director of the Community Psychology Program at Georgia State University.   He also founded and serves as Director of Research at EMSTAR Research, Inc., an evaluation and organizational services firm.  He has directed research projects funded by a variety of federal agencies focused on health and well-being.  He has received many honors, including the American Medical Association Substance Abuse Prevention Award.  Dr. Emshoff has conducted evaluation research focused on substance abuse, violence, HIV/AIDS, child abuse, community collaboratives, mentoring, delinquency, health promotion programs and issues of dissemination and implementation of evidence based programs at the local, state, and national levels.  He provides technical assistance in prevention and evaluation to many organizations and serves on the Board of Directors or Executive Committee of several local and national organizations. Approximately 200 of his publications and professional presentations focus on prevention and evaluation issues.  Dr. Emshoff received his Ph.D. in community psychology.  

Workshop 21: Evaluation Capacity Building (ECB): Concepts and tools for use in your organization

Offered: Monday, June 27, 2016 from 1:40pm-5:00pm; Tuesday, June 28, 2016 from 1:40pm-5:00pm

Speaker: Boris Volkov

Level: Mixed (beginner and beyond)

Because evaluation information is increasingly important to organizational planning and decision making, there is a clear and strong need to develop and sustain evaluation capacity within programs and organizations. This workshop will provide you with essential information on evaluation capacity building, including key concepts, approaches, tools, and issues. Using the steps and guidelines of the Checklist for Building Organizational Evaluation Capacity (Volkov & King), a practical approach grounded in literature and practice, you will be able to assess and address your program's or organization's ECB status, challenges, and strategies.

You will learn:

  • To understand the concepts and important issues in evaluation capacity building
  • To identify and describe approaches to ECB
  • To apply the Checklist for Building Organizational Evaluation Capacity strategically
  • To explore challenges, opportunities, and strategies of developing evaluation capacity in your program and organization

Audience: The session is aimed at both practitioners and consumers of program evaluation in organizations seeking to enhance their long-term capacity to conduct and use evaluation in everyday activities.

Boris Volkov, PhD, is an Assistant Professor with the Division of Epidemiology & Community Health and a Director for Monitoring & Evaluation at the Clinical & Translational Science Institute, the University of Minnesota School of Public Health. He completed his Evaluation Fellowship at CDC, where he was a team member and co-author of the Multisite Evaluation of Field Epidemiology Training Programs, which received the Center for Global Health's Annual Honor Award for “Excellence in Program or Policy Evaluation” and was recognized by the American Evaluation Association as an “AEA Exemplary Evaluation Project” in 2015. Volkov has been involved in a number of domestic and international studies in the areas of public health, health care, epidemiology, education, science, and technology, as well as evaluation capacity building in organizations. He is the founding chair of the Internal Evaluation TIG and a founder of the Organizational Learning & Evaluation Capacity Building TIG of the American Evaluation Association. A frequent presenter at national conferences and an author of publications on evaluation issues, Volkov currently teaches Program Evaluation for Public Health Practice to MPH students.

Workshop 22: UFOs, Bigfoot, and Evidence-Based Programs: What Counts as Proof for Program Development and Evaluation

Offered: Monday, June 27, 2016 from 1:40pm-5:00pm; Wednesday, June 29, 2016 from 9:10am-12:30pm

Speakers: Michael Schooley; Aisha Tucker-Brown

Level: Advanced Beginner

Regardless of whether you are questioning the existence of UFOs or trying to identify evidence-based programs, one tenet holds true in program evaluation: “absence of proof is not necessarily proof of absence.” In an era with a strong emphasis on evidence-based programs and targeting investments toward highly effective strategies, the need to appropriately value evidence when evaluating programs is paramount. However, there is a broad range of what can count as evidence when assessing program merit. This workshop will present a continuum of evidence and a framework for assessing evidence to determine best practices. Approaches to assessing and building evidence will be covered, with the relative strengths and weaknesses of various approaches discussed. Participants will have the opportunity to apply the approach to a case example.

You will learn:

  • Continuum of evidence
  • Approaches to assessing extant evidence through review and rating
  • How to build evidence and determine best practice
  • Approaches for translating evidence for different stakeholders

Audience: This course is designed for individuals who are looking for practical ways to assess, build, and use evidence for decision making.  All participants should have a basic knowledge of program evaluation. 

Michael Schooley is Chief of the Applied Research and Evaluation Branch at the Division for Heart Disease and Stroke Prevention, Centers for Disease Control and Prevention (CDC). Michael has been working with CDC for over 20 years, focusing on program evaluation, performance measurement, policy research, and research translation.

Aisha Tucker-Brown is a Senior Evaluator on the Evaluation and Program Effectiveness Team of the Applied Research and Evaluation Branch at the Division for Heart Disease and Stroke Prevention, Centers for Disease Control and Prevention (CDC). She has more than 10 years of evaluation experience and has spent her eight-year tenure at CDC focusing on program evaluation, evaluation design, and increasing practice-based evidence.

Workshop 23: A Participatory Method for Engaging Stakeholders with Evaluation Findings

Offered: Monday, June 27, 2016 9:10am-12:30pm; Tuesday, June 28, 2016 1:40pm-5:00pm

Speaker: Adrienne Adams

In this workshop, learn how to facilitate the “Expectations to Change (E2C)” process, a six-step, interactive, workshop-based method for guiding evaluation stakeholders from establishing performance standards (i.e., “expectations”) to formulating action steps toward desired programmatic change. The E2C process is designed to engage stakeholders with their evaluation findings as a means of promoting evaluation use and building evaluation capacity. The distinguishing feature of this process is that it is uniquely suited for contexts in which the aim is to assess performance on a set of indicators by comparing actual performance to planned performance standards for the purpose of program improvement. In the E2C process, stakeholders are guided through establishing standards, comparing the actual results to those standards to identify areas for improvement, and then generating recommendations and concrete action steps to implement desired changes. At its core, E2C is a process of self-evaluation, and the role of the evaluator is that of facilitator, teacher, and technical consultant.

You will learn how to:

  • Establish performance standards
  • Compare evaluation findings to established standards to identify areas for improvement
  • Generate recommendations targeting identified areas for improvement
  • Formulate concrete action steps for implementing recommendations

Audience: Evaluation practitioners and consumers with a basic knowledge of evaluation concepts.

Adrienne Adams is an Associate Professor of Psychology at Michigan State University. She holds a PhD in community psychology. She has evaluated local, state, and national domestic violence and sexual assault victim service programs, including the Department of Defense Domestic Abuse Victim Advocacy Pilot Program. Adrienne also serves as the Director of Evaluation for a large, urban non-profit organization that offers a wide array of supportive programs for victims of sexual assault and domestic violence. She uses participatory evaluation methods to build evaluation capacity and foster organizational learning. She is a board member of the Michigan Association for Evaluation and a member of the American Evaluation Association, and has published in the American Journal of Evaluation. 

Workshop 24: Consulting in Communities (non-profits)

Offered: Tuesday, June 28, 2016 from 9:10am-12:30pm; Wednesday, June 29, 2016 from 9:10am-12:30pm

Speakers: Susan Wolfe; Ann Price

Level: All

This skill-development workshop is designed for evaluators who engage in community-based work and those who consult with nonprofit and public organizations (including government agencies, school districts, and medical institutions). Through lecture, discussion, and exercises, this hands-on, interactive workshop will provide the foundations for practice in effective community engagement. Workshop presenters will review topics such as the personal qualities and professional competencies needed to be effective in community practice. Participants will complete a personal competencies inventory that will provide insights regarding their own professional development needs. Through case studies, attendees will identify ways to work successfully with community-based organizations to facilitate collaboration while navigating resource limitations and community and organizational politics.

You will learn:

  • What community consulting is and how it differs from other types of consulting
  • Personal characteristics and professional competencies needed for success and effectiveness
  • To identify your specific areas of strengths and weaknesses related to personal and professional competencies
  • How to help facilitate collaboration within community based organizations and community members
  • To identify community and organizational barriers to effective community collaboration and ways to deal with these barriers as a community consultant

Audience:  Evaluators who will be working with community-based organizations in any consulting capacity or independent consultants.

Susan M. Wolfe, Ph.D., is a community and developmental psychologist with over 30 years of experience conducting evaluations and working in communities. She has held jobs with large community-hospital districts; a community college district; a large K-12 school district; universities; a children’s mental health clinic; the federal government; and as a consultant, both independent and for non-profit organizations. She has worked on research and evaluation projects, longitudinal and cross-sectional, spanning numerous content areas that include health disparities (cancer, maternal child health); nursing homes; homelessness; technological innovation; K-12 through college education; domestic violence and sexual assault; teen pregnancy and adolescent development; and mental health and health services. Dr. Wolfe has published in peer-reviewed journals, books, and other venues and presented at numerous national and international meetings. She is the co-editor (with Victoria Scott) of Community Psychology: Foundations for Practice, which was published by Sage Publications in 2015. Her awards and accolades include the Society for Community Research and Action’s (SCRA) Distinguished Contributions to Community Psychology Practice award; the U.S. Department of Health and Human Services Inspector General’s Award for Excellence in Program Evaluation; and three Inspector General’s Exceptional Achievement Awards. She earned a PhD in Human Development from the University of Texas at Dallas. She is currently a Senior Consultant with CNM Connect in Dallas, TX, where she provides evaluation and capacity building services to nonprofit organizations, and an adjunct faculty member in the Behavioral and Community Health Public Health Program at the University of North Texas Health Science Center.

Ann Webb Price, PhD, is the founder and President of Community Evaluation Solutions, a social science evaluation firm based in the metro-Atlanta area. Dr. Price is a community psychologist with almost 20 years of experience designing, implementing, and evaluating community-based organizations, foundations, and state and federally funded prevention programs. Prior to working in evaluation, Dr. Price worked in substance abuse treatment and prevention. She conducts evaluations in many areas, including education and dropout prevention, substance abuse prevention, youth development, foster care advocacy, child care, community coalitions, and public health. She has evaluated several community coalitions, including the Drug Free Coalition of Hall County, the Drug Free Coalition of Columbia County, the Michigan Oral Health Coalition, and the Columbia County Family Connections, among others. Prior to CES, Dr. Price worked as a Senior Data Analyst at ICF Macro on a national multi-site longitudinal study of the Comprehensive Community Mental Health Services for Children and Their Families (CMHS). Dr. Price earned her Doctorate in Community Psychology from Georgia State University and an M.A. in Clinical Psychology from the University of West Florida. Ann is an active member of the American Evaluation Association and its Community Psychology Topical Interest Group.

Workshop 25: Data Visualization

Offered: Monday, June 27, 2016 from 9:10am-12:30pm; Tuesday, June 28, 2016 from 9:10am-12:30pm

Speaker: Susan Kistler

Level:  Beginner

In this interactive workshop, learn how to display data in ways that increase understanding, garner attention, and further the purpose of your work. Participants will draw on research in cognition, theories of perception, and principles of graphic design. Through exercises, critical review, and discussion of multiple examples, you will learn how to make crucial design decisions and to justify those decisions based on multiple sources of evidence. Leave with samples, checklists, reading suggestions, and even a bit of chocolate. 

You will learn how to:

  • Choose from among multiple types of visualizations
  • Accentuate the key 'take home' message in your data
  • Structure a visualization to increase comprehension
  • Leverage data visualizations as tools in multiple contexts

Susan Kistler is an independent consultant and former Executive Director of the American Evaluation Association. Susan has worked with groups in nonprofits, education, government, and business to improve their data collection and use. She brings twenty years of experience as a teacher and trainer to her workshops and was identified as a top facilitator as part of the American Evaluation Association's Potent Presentations Initiative (P2I). 

Workshop 26: Low-Cost/No-Cost Tech Tools for Evaluators

Offered: Monday, June 27, 2016 from 1:40pm-5:00pm; Wednesday, June 29, 2016 from 9:10am-12:30pm

Speaker: Susan Kistler

Level:  All

This fun, fast-paced workshop consists of demonstrations of multiple technology tools used by evaluators. It is not meant to be a comprehensive exploration, but rather an overview of a range of tools that meet the real-world needs of working professionals. Each tool will be demonstrated with enough information for you to decide if it is worth further independent exploration and you will leave with access to where to learn more about each one. The workshop will end with a call to the audience to contribute their favorite tools, so come ready to learn and ready to share. The tools explored in this session are not only appropriate for evaluators, but also for students, researchers, entrepreneurs, and consultants.

You will learn:

  • To manage online resources
  • To gather and analyze data
  • To engage stakeholders in the evaluation process
  • To take your reporting in new directions

Susan Kistler is an independent consultant and former Executive Director of the American Evaluation Association. Susan has worked with groups in nonprofits, education, government, and business to improve their data collection and use. She brings 20 years of experience as a teacher and trainer to her workshops and was identified as a top facilitator as part of the American Evaluation Association's Potent Presentations Initiative.