SUMMER EVALUATION INSTITUTE 2015

 Workshop and Course Descriptions

Workshop 1: Practical Methods for Improving Evaluation Communication

Offered: Sunday, May 31, 2015 from 9:00am-4:00pm

Speaker: Stephanie Evergreen

Level: Beginner

Description: Presenting data effectively makes the difference between reports that are used as doorstops and those that open doors of conversation and action. In this workshop, attendees will learn the science behind presenting data effectively and will leave with direct, pointed changes that can be immediately applied to their own conference presentations and other evaluation deliverables. Beyond the scope of the conference, this workshop will address principles of graph, slideshow, and report design that support legibility, comprehension, and retention of data in the minds of clients. Hands-on work inside the Microsoft Office suite will enhance attendees’ ability to communicate more effectively with peers, colleagues, and clients through a focus on the proper use of color, arrangement, graphics, and text in written evaluation documents. Attendees are strongly encouraged to maximize the workshop experience by bringing laptops for reworking provided sample materials. The instructor will demonstrate on a PC running Office 2013.

You will learn:

  • Visual processing theory and why it is relevant for evaluators
  • Graphic design best practices based in visual processing theory
  • How to apply graphic design best practices and visual processing theory to enhance evaluation dissemination with simple, immediately implementable steps

Audience:  Evaluation practitioners of all levels in all sectors

Stephanie Evergreen runs Evergreen Data, a data presentations consulting firm, and is the eLearning Initiatives Director for the American Evaluation Association (AEA). Her dissertation focused on the extent of cognition-based graphic design in evaluation reports. Evergreen is the founder and past chair of AEA’s Data Visualization and Reporting Topical Interest Group. She publishes a well-read blog at stephanieevergreen.com and her book Presenting Data Effectively was published by Sage in Fall 2013.

 

Workshop 2: Introduction to Evaluation

Offered: Sunday, May 31, 2015 from 9:00am-4:00pm

Speaker: Tom Chapel

Level: Advanced Beginner

Description: This workshop will provide an overview of program evaluation for Institute participants with some, but not extensive, prior background in program evaluation. The session will be organized around the Centers for Disease Control and Prevention’s (CDC) six-step Framework for Program Evaluation in Public Health as well as the four sets of evaluation standards from the Joint Committee on Standards for Educational Evaluation. The six steps constitute a comprehensive approach to evaluation. While its origins are in the public health sector, the Framework approach can guide any evaluation. The course will touch on all six steps, but particular emphasis will be placed on the early steps, including identification and engagement of stakeholders, creation of logic models, and selection/focus of evaluation questions. Several case studies will be used both as illustrations and as an opportunity for participants to apply the content of the course and work through some of the trade-offs and challenges inherent in program evaluation in public health and human services.

You will learn:

  • A six-step framework for program evaluation
  • How to identify stakeholders, build a logic model, and select evaluation questions
  • The basics of evaluation planning

Audience: Attendees with some background in evaluation, but who desire an overview and an opportunity to examine challenges and approaches. Cases will be from public health but general enough to yield information applicable to any other setting or sector.

Tom Chapel is the first Chief Evaluation Officer at the Centers for Disease Control and Prevention. He serves as a central resource on strategic planning and program evaluation for CDC programs and their partners. Before joining CDC in 2001, Tom was Vice-President of the Atlanta office of Macro International (now ICF International) where he directed and managed projects in program evaluation, strategic planning, and evaluation design for public and nonprofit organizations. He is a frequent presenter at national meetings, a frequent contributor to edited volumes and monographs on evaluation, and has facilitated or served on numerous expert panels on public health and evaluation topics.  In 2013, he was the winner of AEA’s Myrdal Award for Government Evaluation.

Workshop 3: Translating Evaluation Findings to Actionable Recommendations

Offered: Monday, June 1, 2015 from 1:40pm-5:00pm; Tuesday, June 2, 2015 from 9:10am-12:30pm 

Speaker: Lori Wingate

Level: Advanced Beginner and Beyond

Description: In this workshop, participants will learn how to convert evaluation results into sound recommendations that align with the purpose of an evaluation and its intended uses. The workshop presents and demonstrates strategies for developing, presenting, and following up on evaluation recommendations—taking into account the myriad reasons that evaluation recommendations are often not accepted or acted upon. Participants will engage in small-group activities and discussions to apply these strategies to case examples, leaving the session with a solid understanding of how to translate evaluation results into decisions and actions for program improvement.

You will learn how to:

  • Plan for recommendations at the earliest stages of an evaluation
  • Formulate sound recommendations linked to evaluation data and aligned with the purposes of an evaluation
  • Communicate recommendations to ensure they are understandable, relevant, and usable
  • Prepare and support stakeholders for programmatic action planning around the recommendations

Audience: This course is intended for individuals who are responsible for planning, managing, and/or conducting program evaluations (in any sector). Participants should have a basic knowledge of program evaluation.

Lori Wingate is the Assistant Director of The Evaluation Center at Western Michigan University (WMU).  She has a Ph.D. in evaluation and more than 20 years of experience in the field of program evaluation. She directs EvaluATE, the National Science Foundation-funded Advanced Technological Education Evaluation Resource Center, and leads a variety of evaluation projects at WMU focused on STEM education, health, and higher education initiatives. She is an associate member of the graduate faculty at WMU and has led numerous webinars and workshops on evaluation in a variety of contexts. Lori provides technical consultation as a subject matter expert in evaluation to the Center for Global Health and National Center for Immunizations and Respiratory Disease at the Centers for Disease Control and Prevention.

 

Workshop 4: Evaluating Community Coalitions and Partnerships: Methods, Approaches, and Challenges

Offered: Monday, June 1, 2015 from 1:40pm-5:00pm; Tuesday, June 2, 2015 from 9:10am-12:30pm

Speaker: Frances Dunn Butterfoss 

Level: Advanced Beginner

Description: Coalitions involve multiple sectors of the community and implement strategies that focus on policy, systems, and environmental change. The pooling of resources, mobilization of talents, and diverse approaches inherent in a community coalition make it a logical approach for promoting health and preventing disease. A coalition or partnership must evaluate its infrastructure, function, and processes; strategies for achieving its goals; and changes in health/social status or in the community.

You will learn how to:

  • Develop a comprehensive evaluation strategy based on coalition theory
  • Select appropriate short, intermediate, and long-term indicators to measure outcomes
  • Choose appropriate methods and tools
  • Use evaluation results to provide accountability to stakeholders and improve the coalition

Audience: Attendees working in communities with a general knowledge of evaluation terminology and quantitative and qualitative data collection methods.

Frances Dunn Butterfoss is a health educator and President of Coalitions Work, a consulting group that helps communities develop, sustain, and evaluate health promotion/disease prevention coalitions. She is an adjunct professor at Eastern Virginia Medical School and teaches in their MPH program. Fran is the founder of the Consortium for Infant and Child Health (CINCH) and Project Immunize Virginia (PIV). She evaluated Virginia’s Healthy Start coalitions to prevent infant mortality, directed the National Coalition Training Institute, and was funded by the Robert Wood Johnson Foundation for community asthma and health insurance initiatives. She served as Deputy Editor of Health Promotion Practice, and her texts, Coalitions and Partnerships in Community Health and Ignite!, are valued by practitioners and academics alike.

 

Workshop 5: An Executive Summary is Not Enough: Effective Evaluation Reporting Techniques

Offered: Monday, June 1, 2015 from 1:40pm-5:00pm; Tuesday, June 2, 2015 from 1:40pm-5:00pm

Speaker: Kylie Hutchinson

Level: Beginner

Description: As an evaluator you are conscientious about conducting the best evaluation possible, but how much thought do you give to communicating your results effectively? Do you consider your job complete after submitting a final report? Reporting is an important skill for evaluators who care about seeing their results disseminated widely and recommendations actually implemented, but there are alternatives to the traditional lengthy report. This interactive workshop will present an overview of four key principles for effective reporting and engage participants in a discussion of reporting’s role in effective evaluation. Participants will leave with an expanded repertoire of innovative reporting techniques and will have the opportunity to work on a real example in groups.

You will learn:

  • The role of communication and reporting in good evaluation practice
  • Three key principles for communicating results effectively
  • Four alternative techniques for communicating your results

Audience: Evaluation practitioners of all levels in all sectors.

Kylie Hutchinson is the principal of Community Solutions Planning & Evaluation, a consulting firm specializing in evaluation, program planning, and program sustainability. She has over 25 years' experience in the field of evaluation and has conducted numerous evaluations in health, human services, and other areas. In addition to her direct consulting practice, Kylie is a popular trainer known for her engaging and informative workshops on evaluation. For many years she delivered the Canadian Evaluation Society’s Essential Skills Workshop Series in British Columbia, and she is also a contract instructor with the Justice Institute of British Columbia’s Instructor Development Program.

 

Workshop 6: It’s Not the Plan, It’s the Planning: Strategies for Evaluation Plans and Planning

Offered: Monday, June 1, 2015 from 9:10am-12:30pm; Tuesday, June 2, 2015 from 9:10am-12:30pm 

Speaker: Sheila Robinson

Level: Beginner, Advanced Beginner

Description: "If you don’t know where you’re going, you’ll end up somewhere else.", Yogi Berra. Few evaluation texts explicitly address the act of evaluation planning as independent from evaluation design or evaluation reporting. This interactive session will introduce you to an array of evaluation activities that comprise evaluation planning and preparing a comprehensive evaluation plan. You will leave with an understanding of how to identify stakeholders and primary intended users of evaluation, the extent to which they need to understand, and the ability to describe the evaluand (the program), strategies for conducting literature reviews, strategies for developing broad evaluation questions, considerations for evaluation designs, and using the Program Evaluation Standards and AEA’s Guiding Principles for evaluators in evaluation planning.

You will be introduced to a broad range of evaluation planning resources including templates, books, articles, and websites.

You will learn:

  • Types of evaluation activities that comprise evaluation planning
  • Potential components of a comprehensive evaluation plan
  • Considerations for evaluation planning (e.g., client needs, collaboration, procedures, agreements)

Audience: Evaluation practitioners with some background in evaluation basics.

Sheila Robinson is a Grant Coordinator and Instructional Mentor for Greece Central School District, and Adjunct Professor at the University of Rochester’s Warner School of Education. Her background is in special education and professional development and she is a certified Program Evaluator. Her work for the school district centers on professional development, teacher leadership, and evaluation. At Warner School, she teaches graduate courses in Program Evaluation Methods and Designing and Evaluating Professional Development and supervises the Program Evaluation Practicum (for students working toward a Certificate in Program Evaluation). She is Lead Curator of AEA365 Tip-A-Day By and For Evaluators, and is a past Program Chair of AEA's PK-12 Educational Evaluation TIG.

  

Workshop 7: Every Picture Tells a Story: Flow Charts, Logic Models, LogFrames, Etc. What They Are and When to Use Them

Offered: Monday, June 1, 2015 from 9:10am-12:30pm

Speaker: Thomas Chapel

Level: Advanced Beginner

Description: A host of visual aids are in use in planning and evaluation. This session will introduce you to some of the most popular ones—with an emphasis on flow charts, logic models, project network diagrams, and LogFrames. We will review the content and format of each tool and then compare and contrast their uses so that you can better match specific tools to specific program needs. We will review simple ways to construct each type of tool and work through simple cases both as illustrations and as a way for you to practice the principles presented in the session.

You will learn:

  • How to identify the proper visual aid tools for program needs
  • How to construct each type of tool

Audience:  Participants should have prior familiarity with evaluation terminology and some experience in constructing logic models.

Thomas Chapel is the first Chief Evaluation Officer at the Centers for Disease Control and Prevention. He serves as a central resource on strategic planning and program evaluation for CDC programs and their partners. Before joining CDC in 2001, Tom was Vice-President of the Atlanta office of Macro International (now ICF International) where he directed and managed projects in program evaluation, strategic planning, and evaluation design for public and nonprofit organizations. He is a frequent presenter at national meetings, a frequent contributor to edited volumes and monographs on evaluation, and has facilitated or served on numerous expert panels on public health and evaluation topics.  In 2013, he was the winner of AEA’s Myrdal Award for Government Evaluation.

 

Workshop 8: Culturally Responsive Evaluation (CRE): Theory to Practice and Back Again

Offered: Monday, June 1, 2015 from 9:10am-12:30pm; Tuesday, June 2, 2015 from 9:10am-12:30pm

Level: Intermediate

Speakers: Rodney K. Hopson, Karen E. Kirkhart

Description: This workshop addresses the theory that grounds Culturally Responsive Evaluation (CRE) and the strategies that bring it to life in evaluation practice. Following opening introductions, presenters set the context with a brief discussion of the centrality of culture in evaluation and the history of how the evaluation profession is coming to a clearer appreciation of culture. Against this backdrop, CRE’s development is highlighted and key elements of the CRE framework are identified.

The workshop then transitions from theory to practice in three segments. The first segment pairs analysis of evaluation contexts with reflections on one’s own cultural location as an evaluator. This prepares participants in the second segment to consider methods that are culturally congruent with their contexts of practice.  Potential strengths and limitations of each will be discussed. CRE values the return of benefit to the community, and the third segment examines both methods and issues in communicating findings and promoting their culturally relevant use. Presenters pair examples from the literature with participants’ own examples to connect workshop content with participants’ contexts, interests, and concerns. In closing, the workshop returns to Big Picture issues such as the fundamental grounding of CRE in social justice and how this poses important metaevaluation questions that connect to both ethics and validity.

You will learn how to:

  • Describe key elements of culturally responsive evaluation theory that can improve the quality of evaluation in diverse settings
  • Apply strategies of culturally responsive evaluation to each stage of evaluation practice, strengthening the validity of understandings
  • Develop questions about the contexts in which you are working that will promote discourse on cultural relevance and power
  • Assess your own individual cultural locations and describe how these influence the design choices you make in your evaluation work
  • Describe the connections among validity, ethics, and equity to improve evaluation’s ability to support social justice

Audience: This workshop is designed for practicing evaluators of all levels of experience. Basic understanding of the evaluation process is assumed, but it is not necessary that participants have conducted an evaluation on their own. Participants are encouraged to come with a program context in mind so that audience examples can be used throughout the workshop.

Rodney K. Hopson is Professor, Division of Educational Psychology, Research Methods, and Education Policy, College of Education and Human Development, George Mason University.  He received his Ph.D. from the Curry School of Education, University of Virginia and has done post-doctoral/sabbatical studies in the Faculty of Education, University of Namibia, the Johns Hopkins Bloomberg School of Public Health and Centre of African Studies, Cambridge University.

Karen E. Kirkhart is Professor, School of Social Work, David B. Falk College of Sport and Human Dynamics, Syracuse University. She is an affiliated faculty member of the Center for Culturally Responsive Evaluation and Assessment (CREA) at the University of Illinois, Urbana-Champaign. She holds a Ph.D. in Social Work and Psychology from The University of Michigan. Rodney and Karen have each served as President of the American Evaluation Association, and both are actively involved in education and scholarship on culture, diversity, and social justice in evaluation.

 

Workshop 9: Qualitative Approaches to Evaluation: Core Concepts for Defensible Practice

Offered: Monday, June 1, 2015 from 9:10am-12:30pm; Tuesday, June 2, 2015 from 9:10am-12:30pm

Speaker: Jennifer C. Greene

Level:  Beginner

Description: This workshop offers an introduction to the core stances and principles of qualitative approaches to evaluation, with a focus on overall design and the collection, analysis, and reporting of interview and observation data.

You will learn:

  • Initial competencies in the basic rationales for and strategies of qualitative evaluation

Audience:  All evaluators interested in developing some basic competencies in qualitative methods and thinking.

Jennifer C. Greene is a Professor of Educational Psychology at the University of Illinois at Urbana-Champaign, with expertise in social science methodology and in the theory and practice of educational program evaluation. Her work focuses on the intersection of social science methodology and social policy and aspires to be both methodologically innovative and socially responsible. Dr. Greene has held leadership positions in the American Evaluation Association and the American Educational Research Association, including the presidency of AEA in 2011.

 

Workshop 10: Needs Assessment - Basic Ideas, a Guiding Model, and Hands-on Work with Several Interesting Methods

Offered: Tuesday, June 2, 2015 from 1:40pm-5:00pm; Wednesday, June 3, 2015 from 9:10am-12:30pm

Speaker: James W. Altschuld

Level:  Basic to Intermediate

Description: After establishing the ABCs of needs assessment (NA), such as concepts, terms, a guiding model, and a brief overview of methods often employed in the NA process, participants will be involved in several hands-on activities to demonstrate what these methods entail and how they bring the NA process to life. Throughout the sessions, questions and active discussion will be encouraged and welcomed.

You will learn:

  • Definitions of terms and concepts such as need, needs assessment, and discrepancies
  • A three phase model for assessing needs
  • An overview of most of the methods used in assessing needs
  • How to use several methods

James W. Altschuld is Professor Emeritus and a member of the Emeritus Academy at The Ohio State University (OSU), where he taught research methods and program evaluation for 28 years. His undergraduate and master’s degrees are in Chemistry (Case Western Reserve University and OSU), and his doctorate is in Educational Development from OSU. He has published extensively in evaluation, with emphases on needs assessment (8 books and many research articles) and the evaluation and credentialing of evaluators. His numerous awards include the Alva and Gunnar Myrdal Award from the American Evaluation Association for contributions to the field. He has presented many times on NA in venues from local to international and has led many workshops on the topic.

 

Workshop 11: Twelve Steps of Quantitative Data Cleaning: Strategies for Dealing with Dirty Evaluation Data

Offered: Monday, June 1, 2015 from 9:10am-12:30pm; Monday, June 1, 2015 from 1:40pm-5:00pm

Speaker: Jennifer Ann Morrow

Level: Beginner/Intermediate

Description: Evaluation data, like a lot of research data, can be messy. Rarely are evaluators given data that are ready to be analyzed. Missing data, coding mistakes, and outliers are just some of the problems that evaluators should address prior to conducting analyses for their evaluation report. Even though data cleaning is an important step in data analysis, the topic has received little attention in the literature, and the resources that are available tend to be complex and not always user friendly.

In this workshop, you will go step-by-step through the data cleaning process and learn suggestions for what to do at each step.
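
To give a flavor of what such cleaning steps can look like in practice, the following is a minimal, hypothetical sketch in Python using pandas; it is not drawn from the workshop's 12-step protocol, and the file name, column names, and cutoffs are invented for illustration.

    # Illustrative only: a few common data-cleaning checks with pandas.
    # The dataset, column names, and cutoffs are hypothetical.
    import pandas as pd

    df = pd.read_csv("survey_data.csv")  # hypothetical evaluation dataset

    # Profile missing data before deciding how to handle it.
    print(df.isna().mean().sort_values(ascending=False))

    # Flag out-of-range codes (e.g., a 1-5 Likert item coded outside 1-5).
    bad_codes = df["satisfaction"].notna() & ~df["satisfaction"].between(1, 5)
    print(f"{bad_codes.sum()} rows with invalid satisfaction codes")

    # Screen a continuous measure for univariate outliers using z-scores.
    z = (df["test_score"] - df["test_score"].mean()) / df["test_score"].std()
    print(f"{(z.abs() > 3).sum()} potential outliers on test_score")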

You will learn:

  • The recommended 12 steps for cleaning dirty evaluation data
  • Suggestions for ways to deal with messy data at each step
  • Methods for reviewing analysis outputs and making decisions regarding data cleaning options

Audience: This workshop is for novice and experienced evaluators.

Jennifer Ann Morrow is an Associate Professor in Evaluation, Statistics and Measurement at the University of Tennessee with more than 17 years of experience teaching evaluation and statistics at the undergraduate and graduate levels. She is currently working on a book about the 12 steps of data cleaning.

 

Workshop 12: Process Evaluation: What You Need To Know and How to Get Started

Offered: Monday, June 1, 2015 from 9:10am-12:30pm; Tuesday, June 2, 2015 from 1:40pm-5:00pm

Speaker: Laura A. Linnan

Level: Beginner/Intermediate

Description: Process evaluation results are the key to understanding why and how interventions work (or fail to work). This workshop will review some key process evaluation terminology; review a systematic approach for establishing a comprehensive process evaluation effort; and apply these techniques in a small group activity where workshop participants will have a chance to try out these skills with a real-world public health intervention. In this highly interactive workshop, we will also discuss key process evaluation challenges, lessons learned, and strategies for overcoming them with an aim toward improved intervention implementation and effectiveness.

Audience: This workshop is for evaluators of public health interventions.

Laura A. Linnan, ScD, is a Professor in the Department of Health Behavior at the UNC Chapel Hill Gillings School of Global Public Health and Director of the Carolina Collaborative for Research on Work and Health. Her major focus is the design, implementation, and evaluation of interventions designed to address chronic disease disparities, including cancer, CVD, arthritis, and diabetes.  She uses mixed-method approaches grounded in community-based participatory research principles.  She has been Principal Investigator of more than thirty worksite and other community-based intervention or evaluation studies in beauty salons, barbershops, public libraries, and worksites. She has published over 100 manuscripts and book chapters and co-edited (with Allan Steckler) Process Evaluation for Public Health Interventions and Research (Jossey-Bass, 2002).

 

Workshop 13: RealWorld Evaluation: Practical Approaches for Conducting Evaluation in Spite of Constraints

Offered: Monday, June 1, 2015 from 9:10am-12:30pm; Wednesday, June 3, 2015 from 9:10am-12:30pm

Speaker: Jim Rugh

Level: Mixed

Description: An evaluator coming from a research background may find it challenging to cope with a number of constraints when asked to design and conduct an evaluation of a ‘real-world’ program. Typical constraints include lack of comparable baseline data (much less data on a comparison group) and insufficient time or budget allocated by clients. How can you conduct adequately valid evaluations under such circumstances? The facilitator of this workshop will summarize the approaches advocated in the RealWorld Evaluation book and share examples from his extensive international experience. He will emphasize the need for more holistic and practical approaches to impact evaluation.

You will learn:

  • Seven basic steps for planning, conducting, and sharing the results of an evaluation
  • How to produce adequately reliable and useful evaluations in spite of inadequate budget, insufficient time, lack of baseline or comparison data, and stakeholder expectations

Audience:  This workshop is intended for people responsible for commissioning or conducting program evaluations.

Jim Rugh, co-author of the RealWorld Evaluation book (see www.RealWorldEvaluation.org), has facilitated workshops on this subject for many professional evaluation organizations in many countries. Jim has been involved in international development for 50 years (in addition to a childhood in India), including 34 years as a specialist in program evaluation.  Since retiring as the Director of Evaluation for CARE International in 2007, he has been asked to provide advice and training to many different international development agencies.  He currently serves as the Coordinator of the EvalPartners Initiative, a collaborative undertaking of IOCE, UNICEF and many other partner organizations.

 

Workshop 14: Conflict Resolution Skills for Evaluators

Offered: Tuesday, June 2, 2015 from 1:40pm-5:00pm; Wednesday, June 3, 2015 from 9:10am-12:30pm

Speaker: Jeanne Zimmer

Level: Beginner, Advanced Beginner

Description: Unacknowledged and unresolved conflict can challenge even the most skilled evaluators. Conflict between evaluators and clients and among stakeholders creates barriers to successful completion of the evaluation project. This workshop will delve into ways to improve listening, problem solving, communication, and facilitation skills and introduce a streamlined process of conflict resolution that may be used with clients and stakeholders.

Through a hands-on, experiential approach using real-life examples from program evaluation, you will become skilled at the practical applications of conflict resolution as they apply to situations in program evaluation. You will have the opportunity to assess your own approach to handling conflict and to build on that assessment to improve your conflict resolution skills.

You will learn:

  • The nature of conflict in program evaluation and possible positive outcomes
  • How to incorporate the five styles of conflict-resolution as part of reflective practice
  • Approaches to resolving conflict among stakeholders with diverse backgrounds and experiences
  • Techniques for responding to anger and high emotion in conflict situations
  • How to problem-solve effectively, including win-win guidelines, clarifying, summarizing, and reframing
     

Jeanne Zimmer has served as Executive Director of the Dispute Resolution Center since 2001 and is completing a doctorate in evaluation studies with a minor in conflict management at the University of Minnesota. For over a decade, she has been a well-received professional trainer in conflict resolution and communication skills.

Workshop 15: Using Theory to Improve Evaluation Practice

Offered: Tuesday, June 2, 2015 from 1:40pm-5:00pm

Speaker: Stewart Donaldson

Level: Beginner

Description: This workshop is designed to provide evaluators with an opportunity to improve their understanding of how to use theory to improve evaluation practice. We will examine social science theory and stakeholder theories, including theories of change and their application to making real improvements in how evaluations are framed and conducted. Lecture, exercises, and discussions will help participants learn how to apply evaluation theories, social science theories, and stakeholder theories of change to improve the accuracy and usefulness of evaluations. A wide range of examples from evaluation practice will be provided to illustrate main points and key take-home messages.

You will learn:

  • How to define and describe evaluation theory, social science theory, and program theory
  • How evaluation theory can be used to improve evaluation practice
  • How implicit and explicit social science theories can be used to guide evaluation decisions
  • The components and processes of several commonly used social science theories that have been used to develop and evaluate interventions
  • How developing stakeholder theories of change can improve evaluation practice

Audience: This workshop is intended for evaluators working in any context.

Stewart Donaldson is President of AEA, Professor & Dean of the School of Social Science, Policy & Evaluation and the School of Community and Global Health, and Director of the Claremont Evaluation Center at Claremont Graduate University.  He has published widely on the topic of applying evaluation and program theory, developed one of the largest university-based evaluation training programs, and has conducted developmental, formative, and summative evaluations for more than 100 organizations during the past two decades.

 

Workshop 16: Beyond the Basics of Program Design: A Theory-Driven Approach

Offered: Tuesday, June 2, 2015 from 9:10am-12:30pm

Speakers: Stewart Donaldson, John Gargani

Level: Intermediate

Description: A strong program design is critical to the success of social, health, educational, organizational, and other programs. Consequently, evaluators with strong design skills can improve a program’s chances of success and the quality of their evaluations by taking an active part in the design process.

Building on the popular beginner workshop “Basics of Program Design,” this hands-on workshop will help you take your program design skills to the next level. You will learn how to work more effectively with stakeholders in collaborative settings, and how that can yield stronger, more useful evaluations. Mini-lectures interspersed with small-group activities will help you apply and understand the concepts presented. Examples from evaluation practice will be provided to illustrate main points and key take-home messages, and you will receive a handout of further resources.

You will learn:

  • How to develop, refine, and integrate all the elements of a program design from a theory-driven evaluation perspective
  • How to ensure that stakeholder values are embedded in the program
  • How to connect program activities with program purposes in a detailed, comprehensive way
  • How to use a program design to craft comprehensive monitoring and evaluation systems
  • How to identify roles evaluators can play in a collaborative design process
  • How to describe and address the common challenges and professional and ethical issues involved with evaluators designing or improving programs 

Stewart Donaldson is President of AEA, Professor & Dean of the School of Social Science, Policy & Evaluation and the School of Community and Global Health, and Director of the Claremont Evaluation Center at Claremont Graduate University.  He has published widely on the topic of applying evaluation and program theory, developed one of the largest university-based evaluation training programs, and has conducted developmental, formative, and summative evaluations for more than 100 organizations during the past two decades.

John Gargani is the President and Founder of Gargani + Company, Inc., a program design and evaluation firm located in Berkeley, California. When he is not helping nonprofit organizations, foundations, corporations, and government agencies achieve their social missions, he is writing about evaluation, sharing his thoughts on the field at EvalBlog.com, speaking at conferences around the world, and conducting workshops to train the next generation of evaluators. Over the past 20 years, his work has taken him to diverse settings, including public housing projects, museums, countries adopting free market economies, and 19th century sailing ships. He has designed innovative social enterprises; directed large-scale randomized trials; and created novel technologies that measure how people think.

 

Workshop 17: Advanced Cost-Effectiveness Analysis for Health and Human Service Programs

Offered: Tuesday, June 2, 2015 from 9:10am-12:30pm; Wednesday, June 3, 2015 from 9:10am-12:30pm

Speaker: Edward Broughton

Level: Advanced

Description: The relentless drive for health and human service programs to be more efficient and affordable demands robust economic analyses so policymakers can make informed decisions.

This workshop builds on basic knowledge and skills in cost-effectiveness analysis (CEA) to help you understand the workings of more realistic economic models that take into account uncertain data and changing circumstances. You will learn what sensitivity analysis is, what a Markov model looks like, and how to use probability modeling. By the end of the session, you will be able to conduct your own basic cost-effectiveness analysis and interpret and communicate its results. You will also understand more complex economic analyses of health and human service programs and possess the basic framework upon which you can develop further skills in this area. The presentation will be highly interactive to maximize class participation.
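
As a rough illustration of the core calculation (with entirely hypothetical numbers, not workshop materials), the sketch below computes an incremental cost-effectiveness ratio (ICER) in Python and varies one input as a simple one-way sensitivity analysis.

    # Illustrative only: ICER with a one-way sensitivity analysis.
    # All costs and effect sizes are hypothetical.
    def icer(cost_new, cost_old, effect_new, effect_old):
        """Incremental cost per additional unit of effect (e.g., per QALY gained)."""
        return (cost_new - cost_old) / (effect_new - effect_old)

    base = icer(cost_new=250_000, cost_old=150_000, effect_new=60.0, effect_old=40.0)
    print(f"Base-case ICER: ${base:,.0f} per unit of effect")

    # One-way sensitivity analysis: vary the new program's effectiveness by +/-20%.
    for effect_new in (48.0, 60.0, 72.0):
        value = icer(250_000, 150_000, effect_new, 40.0)
        print(f"effect_new={effect_new:5.1f} -> ICER ${value:,.0f}")

Markov models and probabilistic approaches extend this same logic by simulating transitions between states over time and drawing inputs from probability distributions rather than point estimates.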

You will learn:

  • How to do a CEA that is relevant to the work you are involved in
  • How to develop a model that accounts for the uncertainty of the inputs to it
  • How to interpret cost-effectiveness acceptability curves
  • What a Markov model is used for and how it works
  • How to effectively interpret and communicate the results of a CEA

Audience: Evaluation practitioners who have some experience in the field and who would like to become familiar with cost-effectiveness and other forms of economic evaluation. You do not need to have any experience in cost-effectiveness analysis for this session.

Edward Broughton is Director of Research and Evaluation on the USAID ASSIST Project with University Research Co. He previously served as adjunct faculty at Mailman School of Public Health at Columbia University, teaching about economic analyses, health economics, research methods for health policy and management, and decision analysis.

Workshop 18: Systems as Program Theory and as Methodology: A Hands on Approach over the Evaluation Life Cycle

Offered: Tuesday, June 2, 2015 from 1:40pm-5:00pm; Wednesday, June 3, 2015 from 9:10am-12:30pm

Speaker: Jonathan A. Morell PhD

Level: Beginner/intermediate. Attendees must have some knowledge and experience in designing and conducting evaluation.

Description: This workshop will provide an opportunity to learn how to use a systems approach when designing and conducting evaluation. The presentation will be practical. It is intended to give participants a hands-on ability to make pragmatic choices about developing and doing evaluation. Topics covered will be: 1) What do systems “look like” in terms of form and structure? 2) How do systems behave? 3) How can systems be used to develop program theory, as a methodology, and as a framework for data interpretation? 4) How should a systems approach be used along different parts of an evaluation life cycle – from initial design to reporting? The workshop will be built around real evaluation cases. Participants will be expected to work in groups to apply the material that will be presented.

You will learn:

  • The form and structure of systems
  • How systems behave
  • Relevance of system form and behavior for program theory, methodology, and data interpretation
  • Use of systems along the entire evaluation life cycle – from initial design to reporting

Audience: This workshop is designed for two groups of people. The first is evaluators who are engaged in designing or executing evaluation. The second is comprised of people who would benefit from insight on how system structure and behavior may affect programs they are funding, designing, or implementing.

Jonathan A. Morell, PhD (Jonny), is an organizational psychologist with extensive experience in the theory and practice of program evaluation. His current hands-on evaluations involve safety programs in industry, evaluating R&D, programs to minimize distracted driving, applying best practices in development, and evaluation capacity building. Based on his practical evaluation experience, Jonny has made theoretical contributions to the field involving evaluation methods for programs that exhibit unexpected behaviors and the application of complex systems to support methodology, data interpretation, and theory building. His views are set out in his book, Evaluation in the Face of Uncertainty: Anticipating Surprise and Responding to the Inevitable, in articles available on his website (jamorell.com), and in postings on his blog (evaluationuncertainty.com). He is the Editor-in-Chief of the journal Evaluation and Program Planning and a recipient of AEA’s Paul F. Lazarsfeld Evaluation Theory Award. His mantra is: Respect data, trust judgment.

 

Workshop 19: Development and Use of Indicators for Program Evaluation

Offered: Tuesday, June 2, 2015 from 9:10am-12:30pm; Tuesday, June 2, 2015 from 1:40pm-5:00pm

Speaker: Goldie MacDonald

Description: The selection of indicators for use in program evaluation can be complex and time-consuming.  Moreover, stakeholders who are expected to participate in this work come to the discussion with varying levels of knowledge relevant to the program and its evaluation.  So, how do we identify and select good indicators?  How do we encourage full participation of stakeholders in this dialogue?  The workshop includes discussion of criteria for selection of indicators based on more than a decade of work with stakeholders to plan and implement indicators-based evaluations in domestic and international settings.

You will learn how to:

  • Define and differentiate between indicators and related terms
  • Explain the necessary alignment of indicators to other critical elements of evaluation design (e.g., purpose of the evaluation, evaluation questions) and the possible consequences of misalignment
  • Describe and construct both process and outcome indicators
  • Explain the importance of basic literature searches to indicator development and use
  • Review examples of operational definitions that should accompany indicators to be used in an evaluation
  • Explore options for collaborative work with stakeholders to assess and select indicators
  • Recognize common mistakes or practice traps in the development and use of indicators (and understand how to avoid them)

Goldie MacDonald, PhD is a Health Scientist in the Center for Global Health (CGH) at the U.S. Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia.  Dr. MacDonald provides expertise in program evaluation planning, implementation, and preparing for use of findings in the context of public health.  Since joining the CDC in 1999, she co-led design and implementation of a multisite evaluation of the Field Epidemiology Training Program (FETP) in 10 countries; co-led design and implementation of The National Inventory of Core Capabilities for Pandemic Influenza Preparedness and Response to evaluate changes in influenza preparedness in 36 countries; and led design and implementation of an evaluation of the Steps to a HealthierUS Cooperative Agreement Program in 42 communities in the U.S.  She is the lead author of Introduction to Program Evaluation for Comprehensive Tobacco Control Programs.  For their work on this resource, the authors received the Alva and Gunnar Myrdal Award for Government from the American Evaluation Association (AEA) in November 2002.  Currently, she teaches program evaluation for the CDC and partner organizations at regional training events in Africa and Asia.

 

Workshop 20: Smart Data Visualization

Offered: Monday, June 1, 2015 from 9:10am-12:30pm

Speaker: Stephanie Evergreen

Level: Beginner

Description: Crystal clear charts and graphs are valuable – they save an audience’s mental energies, keep a reader engaged, and make you look smart. You can achieve that level of smart data visualization with a working knowledge of fundamental design principles and a little bit of elbow grease. In this workshop, you will learn the research-based best practices that inform smart data visualization. We will focus on the fundamentals of good graph design—simplification and emphasis. The presenter will show how to tweak software default settings to design visualizations with an impact. You are strongly encouraged to bring a laptop running Excel for in-workshop visualization of provided data. The instructor will be demonstrating on a PC running Excel 2013 with limited technical support for Excel on Macs. You will leave the workshop with simple steps that you can immediately implement to enhance your data visualizations.

You will learn:

  • How to optimally format simple graphs
  • How to use Excel to make new graph types
  • How to choose the right graph to tell your story

Audience:  Anyone who regularly graphs data to communicate with others.

Stephanie Evergreen runs Evergreen Data, a data presentations consulting firm, and is the eLearning Initiatives Director for the American Evaluation Association (AEA). Her dissertation focused on the extent of cognition-based graphic design in evaluation reports. Evergreen is the founder and past chair of AEA’s Data Visualization and Reporting Topical Interest Group. She publishes a well-read blog at stephanieevergreen.com and her book Presenting Data Effectively was published by Sage in Fall 2013.

 

Workshop 21: Logic Models as a Practical Tool in Evaluation and Planning

Offered: Monday, June 1, 2015 from 1:40pm-5:00pm

Speaker: Tom Chapel

Description: The logic model, as a map of what a program is and intends to do, is a useful tool in both evaluation and planning and, as importantly, for integrating evaluation plans and strategic plans.  In this session, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining, and validating models. We will then examine how to use these models both prospectively for planning and implementation as well as retrospectively for performance measurement and evaluation.  We will illustrate the value of simple and more elaborate logic models using small-group case studies.

You will learn:

  • To construct simple logic models
  • To use program theory principles to improve a logic model
  • To employ a model to identify and address planning and implementation issues

Audience: This course is intended for advanced beginners.

Tom Chapel is the first Chief Evaluation Officer at the Centers for Disease Control and Prevention. He serves as a central resource on strategic planning and program evaluation for CDC programs and their partners. Before joining CDC in 2001, Tom was Vice-President of the Atlanta office of Macro International (now ICF International) where he directed and managed projects in program evaluation, strategic planning, and evaluation design for public and nonprofit organizations. He is a frequent presenter at national meetings, a frequent contributor to edited volumes and monographs on evaluation, and has facilitated or served on numerous expert panels on public health and evaluation topics.  In 2013, he was the winner of AEA’s Myrdal Award for Government Evaluation.

 

Workshop 22: Introduction to Communities of Practice (CoPs)

Offered: Tuesday, June 2, 2015 from 1:40pm-5:00pm; Wednesday, June 3, 2015 from 9:10am-12:30pm

Speakers: Leah Christina Neubauer, Thomas Archibald

Level: Beginner

Description: This interactive workshop will introduce Communities of Practice (CoPs) and their application for evaluators and evaluation. CoPs are designed to engage learners in a process of knowledge construction around common interests, ideas, passions, and goals - the things that matter to the people in the group.  Through identifying the three core CoP elements (domain, community, and practice), members work to generate a shared repertoire of knowledge and resources.  CoPs can be found in many arenas: corporations, schools, non-profit settings, within evaluation designs, and in local AEA affiliate practice. This session will explore CoP development and implementation for a group of evaluators focused on understanding experience, increasing knowledge, and, ultimately, improving evaluation practice.  Session facilitators will also highlight examples from the fields of evaluation, public health, and adult education and involve participants in a series of hands-on, inquiry-oriented techniques.

You will learn:

  • Key theories and models guiding Communities of Practice (CoPs)
  • The 10 essential fundamentals of developing and sustaining a Community of Practice (CoP)
  • CoP methodologies, including storytelling, arts-based and collaborative inquiry, evaluative thinking, and critical self-reflection

Audience: This presentation is intended for all evaluators.

Leah Christina Neubauer has been working in the field of public health as an educator, evaluator, and researcher for the last fifteen years. Her research is focused on the intersections of critical theory with responsive evaluation practice, sexual health promotion, and continuing/professional education.  Leah has collaborated with many global (Kenya-based), national, state and local partners on a variety of endeavors.  She has delivered numerous presentations and co-authored publications on global public health and community-based evaluation, training and research. Her dissertation, The Critically Reflective Evaluator, identified essential qualities and characteristics of evaluator-formed CoPs.  She is currently the President of the AEA Affiliate – the Chicagoland Evaluation Association and is a Steering Committee member of the AEA Local Affiliate Collaborative (LAC). She is a full-time Instructor and Program Manager in DePaul University’s MPH program.  She is also the founding Executive Director of the global NGO, The Rafiki Collaborative and Education, Research & Action.  She received her EdD in Adult and Continuing Education in 2013 from National Louis University.

Thomas Archibald is an Assistant Professor and Extension Specialist in the Department of Agricultural, Leadership, and Community Education at Virginia Tech. His research and practice focus on program evaluation, evaluation capacity building (especially regarding the emergent notion of “evaluative thinking”), and research-practice integration, focusing specifically on contexts of Cooperative Extension and community education. He has facilitated numerous capacity building workshops around the United States and in sub-Saharan Africa. Archibald is a recipient of the 2013 Michael Scriven Dissertation Award for Outstanding Contribution to Evaluation Theory, Method, or Practice for his dissertation on the politics of evidence in the “evidence-based” education movement. He is a Board Member of the Eastern Evaluation Research Society and a Program Co-Chair of the AEA Organizational Learning and Evaluation Capacity Building Topical Interest Group. He received his PhD in Adult and Extension Education in 2013 from Cornell University, where he was a graduate research assistant in the Cornell Office for Research on Evaluation under the direction of Bill Trochim. 

 

Workshop 23: Integrating Marketing Strategies: Social Media Evaluation and Reporting

Offered: Monday, June 1, 2015 from 1:40pm-5:00pm; Tuesday, June 2, 2015 from 9:10am-12:30pm

Speaker: Joseph Smyser

Level: Mixed

Description: This workshop will get straight to the point. The big players are using Facebook, Twitter, and YouTube in their campaigns, but how are they reporting performance? How are they integrating these sites into their larger campaign evaluation strategy? This workshop will use timely, relevant examples from both public health and private industry marketing campaigns. Best practices in integrating social media into reporting and monitoring and evaluation strategies will be shared. Attendees will see current reporting templates from several different campaigns and organizations, and will be shown data collection and monitoring tools currently employed by government and industry.

You will learn:

  • A list of tools and techniques currently in use by government and industry campaigns
  • How to integrate social media into a larger campaign strategy

Audience: Those involved in the strategic planning and monitoring and evaluation of communication campaigns that employ social media.

Dr. Joe Smyser helped design the monitoring and evaluation strategy for the U.S. Centers for Disease Control and Prevention's Tips From Former Smokers campaign. He was also the Director of Strategy and Partnerships for Outkast, the multiple Grammy Award-winning hip-hop duo. He is currently the Director of Integrated Marketing Strategies for Rescue SCG, implementing the U.S. Food and Drug Administration's national anti-tobacco campaign.

 

Workshop 24: Developmental Evaluation Design Studio

Offered: Monday, June 1, 2015 from 9:10am-12:30pm; Monday, June 1, 2015 from 1:40pm-5:00pm 

Speakers: Karen Minyard, Ph.D., Tina Anderson Smith

Description: Have you heard about Developmental Evaluation and are looking for concrete ways to apply the concepts in your own practice?  This session will review the definitive attributes and applications of Developmental Evaluation – namely, using real-time learning to inform programs and create a continuous feedback loop for improvement and learning – and provide tools that will help you translate the ideas into action.  Instrumental tools and topics may include systems thinking, realist evaluation, and adaptive leadership.  This workshop will be highly interactive and participatory.  Come ready to design and to apply the concepts to your own initiatives.

You will learn:

  • What developmental evaluation is and how it differs from other methods of evaluation;
  • What conditions are most appropriate for using developmental evaluations;
  • Tips and tools for incorporating developmental evaluation practices into an evaluation;
  • Practical examples of developmental evaluation illustrated through case studies.

Karen Minyard, Ph.D. has directed the Georgia Health Policy Center (GHPC) at Georgia State University’s Andrew Young School of Policy Studies since 2001. Dr. Minyard connects the research, policy, and programmatic work of the center across issue areas including: community and public health, long-term care, child health, health philanthropy, public and private health coverage, and the uninsured.  Glenn Landers and Rachel Ferencik, from the Georgia Health Policy Center, will also co-teach.

Tina Anderson Smith is an independent consultant whose 22-year career has been focused on supporting and evaluating health system change at local, state, and national levels.  Specifically, her experience includes designing, managing, and studying the impacts of complex multisite interventions.  With a particular interest in measuring and maximizing the impact of community-level technical assistance, Ms. Smith provides facilitation, planning, research, and evaluation support to aid public and private organizations in designing more effective technical assistance programs.  

 

Workshop 25: Keeping it REAList: Making sense of “What works? For Whom? How? and Under What Conditions?” using a Realist Evaluation Approach

Offered: Wednesday, June 3, 2015 from 9:10am-12:30pm

Speaker: Tina Anderson Smith

Description: Are you engaged in evaluations of complex social programs - ones in which interventions vary across place and time leading to a wide range of outcomes?  Are you faced with the challenge of credibly describing the influence of contextual factors on program outcomes?  Are you eager to make sense of how programs work in addition to whether they produce desired results?  Realist Evaluation offers a theory-driven approach to “explaining” how contextual attributes and behaviors triggered by an intervention combine to produce a complex signature of outcomes.  This methodology helps arm practitioners and policy decision makers with a deep understanding of the contingencies for program success, positioning them to refine program strategies and maximize their impacts.  The workshop will integrate conceptual information with practical discussions and examples, enabling attendees to assess the relevance of this approach for their work and creating opportunities for application.

You will learn:

  • The basic concepts and assertions underlying Realist Evaluation;
  • The phases of Realist Evaluation;
  • Realist Evaluation methods for clarifying program theories and sense-making; and
  • Examples in which Realist Evaluations have been applied to refine program theories, practice, and policy decisions.

Audience: This workshop is for mixed audiences. It may be of particular interest to those whose evaluations involve considerations of contextual influences and/or multi-site interventions.

Tina Anderson Smith is an independent consultant whose 22-year career has been focused on supporting and evaluating health system change at local, state, and national levels.  Specifically, her experience includes designing, managing, and studying the impacts of complex multisite interventions.  With a particular interest in measuring and maximizing the impact of community-level technical assistance, Ms. Smith provides facilitation, planning, research, and evaluation support to aid public and private organizations in designing more effective technical assistance programs.  

 

Workshop 26: An Introduction to Social Return on Investment (SROI) 

Offered: Tuesday, June 2, 2015 from 1:40pm-5:00pm

Speaker: John Gargani

Description: Social return on investment (SROI) is a new and controversial evaluation method. It is widely applied in the UK, Europe, and many international development settings. Now demand for it is growing in the US. What is SROI? It is one application of valuation—representing the value of program impacts in monetary units. Specifically, SROI compares the value of impacts to the cost of producing them. It is strongly associated with social enterprise, impact investing, social impact bonds, value-for-money initiatives, and other efforts that combine business thinking with social betterment. In this hands-on workshop, you will learn the basics of how to conduct an SROI analysis. We will approach the method with a critical eye in order to plan, use, and interpret SROI effectively. You will leave the workshop with a better understanding of how to incorporate SROI into your practice, and how to engage clients and stakeholders in its implementation.
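
As a purely hypothetical illustration of the core arithmetic (not part of the workshop), the sketch below computes an SROI ratio in Python by comparing discounted, monetized impacts to the cost of producing them; all figures are invented.

    # Illustrative only: a bare-bones SROI ratio with hypothetical figures.
    # Real SROI analyses also involve stakeholder engagement, valuation choices,
    # and adjustments such as deadweight and attribution.
    investment = 100_000  # total cost of producing the impacts

    # Monetized social impacts projected over three years.
    yearly_impact_value = [90_000, 80_000, 70_000]
    discount_rate = 0.035

    present_value = sum(
        value / (1 + discount_rate) ** year
        for year, value in enumerate(yearly_impact_value, start=1)
    )

    sroi_ratio = present_value / investment
    print(f"SROI ratio: {sroi_ratio:.2f} : 1")  # roughly 2.25 : 1 here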

John Gargani is the President and Founder of Gargani + Company, Inc., a program design and evaluation firm located in Berkeley, California. When he is not helping nonprofit organizations, foundations, corporations, and government agencies achieve their social missions, he is writing about evaluation, sharing his thoughts on the field at EvalBlog.com, speaking at conferences around the world, and conducting workshops to train the next generation of evaluators. Over the past 20 years, his work has taken him to diverse settings, including public housing projects, museums, countries adopting free market economies, and 19th century sailing ships. He has designed innovative social enterprises; directed large-scale randomized trials; and created novel technologies that measure how people think.