SUMMER EVALUATION INSTITUTE 2017

Workshop and Course Descriptions

Workshop 1: Introduction to Evaluation

Offered: Sunday, June 4, 9:00 a.m. - 4:00 p.m. (with lunch)

Speaker: Jan Jernigan

Level: Advanced Beginner

This workshop will provide an overview of program evaluation for Institute participants with some, but not extensive, prior background in program evaluation. The foundations of this workshop will be organized around the Centers for Disease Control and Prevention’s (CDC) six-step Framework for Program Evaluation in Public Health as well as the four sets of evaluation standards from the Joint Committee on Standards for Educational Evaluation. The six steps constitute a comprehensive approach to evaluation. While its origins are in the public health sector, the Framework approach can guide any evaluation. The workshop will place particular emphasis on the early steps, including identification and engagement of stakeholders, creation of logic models, and selecting/focusing evaluation questions. Through case studies, participants will have the opportunity to apply the content and work through some of the trade-offs and challenges inherent in program evaluation in public health and human services.

You will learn:

  • A six-step framework for program evaluation
  • How to identify stakeholders, build a logic model, and select evaluation questions
  • The basics of evaluation planning

Audience: Attendees with some background in evaluation, but who desire an overview and an opportunity to examine challenges and approaches. Cases will be from public health but general enough to yield information applicable to any other setting or sector.

Jan Jernigan is a Behavioral Scientist and Senior Advisor for Evaluation in the Division of Nutrition, Physical Activity and Obesity (DNPAO), Centers for Disease Control and Prevention (CDC). She is involved in applied evaluation research in the Division and serves as a technical expert, providing training and technical assistance to funded state and local health departments and their partners in conducting evaluations of their initiatives.  Currently, she co-leads the national evaluation of a chronic disease funding opportunity announcement (FOA) that provides $75 million annually to fund all 50 states and the District of Columbia to address chronic diseases and their risk factors and serves as a senior evaluation advisor for four additional FOAs.  Dr. Jernigan leads research efforts to examine communities with declines in childhood obesity, improve physical activity and nutrition for the military as part of the DOD Healthy Base initiative, and develop new evaluation guidance for USDA funding for SNAP-Ed.

Workshop 2: A Primer on Evaluation Theories and Approaches

Offered: Sunday, June 4, 9:00 a.m. - 4:00 p.m. (with lunch)

Speaker: Bianca Montrosse-Moorhead

Level: Beginner

This workshop presents an overview of historical and contemporary theories and approaches to evaluation in interdisciplinary contexts. The primary focus is on key evaluation terminology and classifications of theories and approaches recommended by evaluation thought leaders. Workshop participants will gain insight into how their own backgrounds, training, and contexts may influence their choice of or preference for particular approaches. Incorporating small group activities, case studies, and discussions, this workshop will allow for critical reflection and active engagement with key content so that participants will leave the workshop with a solid understanding of existing theories and approaches and of their strengths, weaknesses, and opportunities for application in practice.

You will learn to:

  • Recognize different evaluation theories and approaches
  • Identify strengths, weaknesses, and opportunities associated with various evaluation theories and approaches in differing contexts
  • Apply different theories and approaches in evaluation practice

Audience: Beginning evaluators in all sectors

Bianca Montrosse-Moorhead is an assistant professor of measurement, evaluation, and assessment in the University of Connecticut’s (UConn) Neag School of Education, and program coordinator for UConn’s Graduate Certificate in Program Evaluation. As an evaluation practitioner, researcher, and educator, Bianca specializes in evaluation theory, methodology, practice, and capacity building. She co-chairs EvalYouth, a global network that aims to build evaluator capacity throughout the world. For more than a decade, she has directed evaluations at the local, state, and national levels. She was the recipient of the American Evaluation Association’s 2014 Marcia Guttentag Award.

Workshop 3: Culture and Evaluation — Hosted by The Evaluators' Institute (TEI)

Offered: Sunday, June 4, 9:00 a.m. - 4:00 p.m. (with lunch)

Speaker: Leona Ba

Level: All 

Participants in this one-day course will receive TEI certification credit. The workshop rate is $480 (regularly $600).

This course will provide participants with the opportunity to learn and apply a step-by-step approach to conducting culturally responsive evaluations. It will use theory-driven evaluation as a framework because it ensures that evaluation is integrated into the design of programs. More specifically, it will follow the three-step Culturally Responsive Theory-Driven Evaluation model proposed by Bledsoe and Donaldson (2015):

  1. Develop program impact theory
  2. Formulate and prioritize evaluation questions
  3. Answer evaluation questions

Upon registration, participants will receive a copy of the book chapter discussing this model.

Prerequisites: Understanding of evaluation and research design.

You will learn how to:

  • Understand how major cultural theories and models are relevant to evaluation
  • Explore strategies for applying cultural sensitivity to evaluation practice
  • Discuss cultural factors affecting the effectiveness of evaluation at different levels

Audience: Evaluation practitioners of all levels in all sectors who are interested in the role of culture in evaluation.

Leona Ba, EdD, is an organizational development and evaluation consultant with more than 18 years of experience in improving the performance and effectiveness of organizations, teams, and programs. Her work includes strategic planning, program design, evaluation and the development of feedback mechanisms between evaluation and organizational processes to facilitate organizational learning and adaptation. She has worked mainly in Sub-Saharan Africa, Haiti and North America designing and leading evaluations of programs of organizations such as the United States Agency for International Development (USAID), the United States State Department, United Nations agencies, the World Cocoa Foundation, Plan International, and the International Development Research Centre (IDRC). She has evaluated programs intervening in various areas including agriculture, emergency response, education, health, microenterprise development and natural resource management. Her extensive experience as an evaluation practitioner in a wide variety of cultural contexts developed her ability to integrate cultural responsiveness throughout her work.

      Workshop 5: Twelve Steps of Quantitative Data Cleaning — Strategies for Dealing with Dirty Evaluation Data

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.; Tuesday, June 6, 9:00 a.m. - 12:00 p.m.

      Speaker: Jennifer Morrow

      Level: Intermediate

      Evaluation data, like a lot of research data, can be messy. Rarely are evaluators given data that is ready to be analyzed. Missing data, coding mistakes, and outliers are just some of the problems that evaluators should address prior to conducting analyses for their evaluation report. Even though data cleaning is an important step in data analysis, the topic has received little attention in the literature, and the resources that are available tend to be complex and not always user friendly. In this workshop, you will go step by step through the data cleaning process and get practical suggestions for what to do at each step.
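
      For orientation only, here is a minimal sketch in Python of the kinds of checks involved; it is not the presenter's 12-step method, and the file and column names are hypothetical.

import pandas as pd

# Hypothetical data file and column names, for illustration only
df = pd.read_csv("survey_responses.csv")

# Coding mistakes: count non-missing values outside the documented 1-5 range
out_of_range = df["satisfaction"].notna() & ~df["satisfaction"].between(1, 5)
print("Out-of-range codes:", out_of_range.sum())

# Missing data: how much is missing, and in which columns
print(df.isna().mean().sort_values(ascending=False).head())

# Outliers: flag extreme values with a simple z-score rule
z = (df["hours"] - df["hours"].mean()) / df["hours"].std()
print("Flagged outliers:", (z.abs() > 3).sum())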

      You will learn:

      • The recommended 12 steps for cleaning dirty evaluation data
      • Suggestions for ways to deal with messy data at each step
      • Methods for reviewing analysis outputs and making decisions regarding data cleaning options

      Audience: Novice and experienced evaluators

      Jennifer Ann Morrow is an Associate Professor in Evaluation, Statistics and Measurement at the University of Tennessee with more than 17 years of experience teaching statistics at the undergraduate and graduate levels. She is currently working on a book about the 12 steps of data cleaning.

      Workshop 6: It's Not the Plan, It's the Planning — Strategies for Evaluation Plans and Planning

      Offered: Tuesday, June 6, 1:00 p.m. - 4:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speaker: Sheila Robinson

      Level: Beginner, Advanced Beginner

      “If you don’t know where you’re going, you’ll end up somewhere else” (Yogi Berra). Few evaluation texts explicitly address the act of evaluation planning as independent from evaluation design or evaluation reporting. This interactive workshop will introduce you to the array of evaluation activities that comprise evaluation planning and the preparation of a comprehensive evaluation plan. You will leave this workshop with an understanding of how to identify stakeholders and primary intended users of the evaluation, the extent to which they need to understand and be able to describe the program, strategies for conducting literature reviews, strategies for developing broad evaluation questions, considerations for evaluation designs, and using the Program Evaluation Standards and AEA’s Guiding Principles for Evaluators in evaluation planning. You will be introduced to a broad range of evaluation planning resources including templates, books, articles, and websites.

      You will learn:

      • Types of evaluation activities that comprise evaluation planning
      • Potential components of a comprehensive evaluation plan
      • Considerations for evaluation planning (e.g., client needs, collaboration, procedures, agreements)

      Audience: Evaluation practitioners with some background in evaluation basics.

      Sheila Robinson is a Program Evaluator and Instructional Mentor for Greece Central School District, and Adjunct Professor at the University of Rochester’s Warner School of Education. Her background is in special education and professional development, and she is a certified Program Evaluator. Her work for the school district centers on professional development, equity and culturally responsive education, and evaluation. At Warner School, she teaches graduate courses in Program Evaluation and Designing and Evaluating Professional Development. She is Lead Curator of AEA365 Tip-A-Day By and For Evaluators, Coordinator of the Potent Presentations (p2i) Initiative, and is a past Program Chair of AEA's PK-12 Educational Evaluation TIG.

      Workshop 7: Logic Models for Program Evaluation and Planning

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speaker: Tom Chapel

      Level: Advanced Beginner

      The logic model, as a map of what a program is and intends to do, is a useful tool in both evaluation and planning and, as importantly, for integrating evaluation plans and strategic plans.  In this workshop, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining, and validating models. Then, we will examine how to use these models both prospectively for planning and implementation as well as retrospectively for performance measurement and evaluation.  This workshop will illustrate the value of simple and elaborate logic models using small group case studies.

      You will learn how to:

      • Construct simple logic models
      • Use program theory principles to improve a logic model
      • Employ a model to identify and address planning and implementation issues

      Audience: Evaluation practitioners in all sectors

      Thomas Chapel is the Chief Evaluation Officer at the Centers for Disease Control and Prevention (CDC). He serves as a central resource on strategic planning and program evaluation for CDC programs and their partners. Before joining the CDC, Chapel was vice president of the Atlanta office of Macro International (now ICF), where he directed and managed projects in program evaluation, strategic planning, and evaluation design for public and nonprofit organizations. He is a frequent presenter at national meetings, a frequent contributor to edited volumes and monographs on evaluation, and has facilitated or served on numerous expert panels on public health and evaluation topics.

      Workshop 8: Introduction to Communities of Practice (CoPs)

      Offered: Tuesday, June 6, 1:00 p.m. - 4:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speakers: Leah Neubauer; Thomas Archibald

      Level: Beginner to Intermediate

      This interactive workshop will introduce Communities of Practice (CoPs) and their application for evaluators and evaluation. CoPs are designed to engage learners in a process of knowledge construction around common interests, ideas, passions, and goals—the things that matter to the people in the group. Through identifying the three core CoP elements (domain, community, and practice), members work to generate a shared repertoire of knowledge and resources. CoPs can be found in many arenas: corporations, schools, non-profit settings, within evaluation designs, and in local AEA affiliate practice. This workshop will explore CoP development and implementation for a group of evaluators focused on understanding experience, increasing knowledge, and ultimately, improving evaluation practice. Session facilitators will also highlight examples from the fields of evaluation, public health, and adult education and involve participants in a series of hands-on, inquiry-oriented techniques.

      You will learn:

      • Key theories and models guiding Communities of Practice
      • The ten essential fundamentals of developing and sustaining a Community of Practice
      • CoP methodologies, including storytelling, arts-based methods, collaborative inquiry, evaluative thinking, and critical self-reflection

      Audience: Evaluation practitioners interested in Communities of Practice

      Dr. Leah Christina Neubauer has been working in the field of public health as an educator, evaluator, and researcher for the last fifteen years. Her research focuses on health education and promotion, global health, and health disparities. She leads and collaborates on projects that employ mixed-method approaches to develop, implement, evaluate, and disseminate translational and culturally responsive research and evaluation. Leah has collaborated with many global (Kenya-based), national, state, and local partners on a variety of endeavors. She has delivered numerous presentations and co-authored publications on global public health and community-based evaluation, training, and research. Her dissertation, The Critically Reflective Evaluator, identified essential qualities and characteristics of evaluator-formed CoPs. She is currently the past president of the Chicagoland Evaluation Association, an AEA affiliate, and co-chair of the AEA Local Affiliate Collaborative (LAC). She is an Assistant Professor of Preventive Medicine at Northwestern University. She received her EdD in Adult and Continuing Education in 2013 from National Louis University.

      Dr. Thomas Archibald is an Assistant Professor and Extension Specialist in the Department of Agricultural, Leadership, and Community Education at Virginia Tech. His research and practice focus on program evaluation, evaluation capacity building (especially regarding the emergent notion of “evaluative thinking”), and research-practice integration, focusing specifically on contexts of Cooperative Extension and community education. He has facilitated numerous capacity building workshops around the United States and in sub-Saharan Africa. Archibald is a recipient of the 2013 Michael Scriven Dissertation Award for Outstanding Contribution to Evaluation Theory, Method, or Practice for his dissertation on the politics of evidence in the “evidence-based” education movement. He is a Board Member of the Eastern Evaluation Research Society and a Program Co-Chair of the AEA Organizational Learning and Evaluation Capacity Building Topical Interest Group. He received his PhD in Adult and Extension Education in 2013 from Cornell University, where he was a graduate research assistant in the Cornell Office for Research on Evaluation under the direction of Bill Trochim.

      Workshop 9: Advanced Cost-Effectiveness Analysis for Health and Human Service Programs

      Offered: Tuesday, June 6, 9:00 a.m. - 12:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speaker: Edward Broughton

      Level: Advanced

      The relentless push for health and human service programs to be more cost-effective and affordable demands robust economic analyses so policymakers can make decisions based on the best available evidence. This workshop builds on basic knowledge and skills in program evaluation to help you understand the workings of real-world economic evaluation models, based on a simple, intuitive framework for calculating the cost-effectiveness of a health or social service intervention. You will learn how to develop these models and interpret their outputs, what sensitivity analysis is, what a Markov model looks like, and how to use probability modeling. By the end of the workshop, you will be able to conduct your own basic cost-effectiveness analysis and interpret and communicate its results accurately and effectively. You will also understand more complex economic analyses of health and human service programs and possess the tools and framework upon which you can develop further skills in this area. The presentation will be highly interactive, and full participation is expected.
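
      As a point of reference only (not part of the workshop materials), here is a minimal cost-effectiveness sketch in Python: an incremental cost-effectiveness ratio (ICER) with a simple one-way sensitivity analysis. All figures are hypothetical.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per unit of effect gained (e.g., per QALY)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical base case: new program vs. standard care
base = icer(cost_new=12000, cost_old=9000, effect_new=6.0, effect_old=5.5)
print(f"Base-case ICER: ${base:,.0f} per QALY gained")

# One-way sensitivity analysis: vary the new program's cost
for cost in (10000, 12000, 14000):
    print(cost, f"${icer(cost, 9000, 6.0, 5.5):,.0f} per QALY")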

      You will learn how to:

      • Create a cost-effectiveness analysis (CEA) relevant to the work you are involved in
      • Develop an economic model that uses real data for practical results
      • Interpret cost-effectiveness acceptability curves
      • Recognize other models used in economic analysis and understand how they work
      • Effectively interpret and communicate the results of a CEA

      Audience: Experienced practitioners working in the areas of health and human services.

      Edward Broughton is Director of Research and Evaluation of the USAID ASSIST Project. He previously served as adjunct faculty at Mailman School of Public Health at Columbia University and associate of Johns Hopkins School of Public Health, teaching about economic analyses, health economics, research methods for health policy, and management and decision analysis.

      Workshop 10: Nonparametric Statistics — What to Do When Your Data Breaks the Rules

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.; Tuesday, June 6, 9:00 a.m. - 12:00 p.m.

      Speaker: Jennifer Catrambone

      Level: Beginner

      This session walks participants through nonparametric statistics, techniques designed to be used on small, uneven, or skewed samples. Participants will leave with a stand-alone handout that clearly identifies situations in which nonparametric statistics should be used, explains when and why they are appropriate, and illustrates how to run the techniques in SPSS (including annotated screen shots), interpret the output, and write up the results. All levels are welcome.
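
      The workshop itself demonstrates these techniques in SPSS; purely as an illustration, here is an analogous check in Python with SciPy, using hypothetical data for two small, skewed groups.

from scipy.stats import mannwhitneyu, shapiro

group_a = [2, 3, 3, 4, 5, 5, 6, 40]   # skewed by one extreme value
group_b = [1, 2, 2, 3, 3, 4, 4, 5]

# A normality check hints that a parametric t-test may be a poor fit
print("Shapiro-Wilk p (group A):", round(shapiro(group_a).pvalue, 3))

# Mann-Whitney U: nonparametric alternative to the independent-samples t-test
stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print("U =", stat, " p =", round(p, 3))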

      You will learn:

      • Basics of nonparametric stats and how they differ from parametric stats
      • When nonparametric stats are appropriate and why, including how to explain this to others in print or in person
      • Types of nonparametric tests that exist
      • To evaluate your own stats situation and apply the best-suited nonparametric test
      • To run the techniques in SPSS (including annotated screen shots)
      • To interpret the output
      • To write up the results for publication or presentation

      Audience: Evaluation practitioners in all sectors.

      Jennifer Catrambone is a long-time stats nerd who has been teaching folks about nonparametric statistics for over a decade. She works full time as Director of Evaluation and Quality Improvement at the Ruth M. Rothstein CORE Center in Chicago. Her journal publications focus primarily on the evaluation of services for people living with HIV/AIDS and for survivors of domestic violence and sexual assault. She has served as a professor for undergraduate and graduate level statistics courses for both the University of Illinois at Chicago and the Chicago School of Professional Psychology. She is currently writing a statistics book for Sage Publications.

      Workshop 11: Focus Groups for Qualitative Topics

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.; Tuesday, June 6, 9:00 a.m. - 12:00 p.m.

      Speaker: Michelle Revels

      Level: Beginner to Intermediate

      As a qualitative research method, focus groups are an important tool to help researchers understand the motivators and determinants of a given behavior. This course, based on the seminal work of Richard Krueger and David Morgan, provides a practical introduction to focus group research. 

      You will learn how to:

      • Identify and discuss critical decisions in designing a focus group study
      • Understand how research or study questions influence decisions regarding segmentation, recruitment, and screening
      • Identify and discuss different types of analytical strategies and focus group reports

      Audience: Evaluation practitioners in all sectors.

      Michelle Revels is a technical director at ICF Macro specializing in focus group research and program evaluation. She has taught focus group research methods at both the AEA Annual Conference and Summer Evaluation Institute for multiple years. Revels attended Hampshire College in Amherst, Massachusetts and the Hubert H. Humphrey Institute of Public Affairs at the University of Minnesota.

      Workshop 12: Identifying Evaluation Questions

      Offered: Monday, June 5, 1:00 p.m. - 4:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speaker: Lori Wingate

      Level: Beginner

      Well-crafted evaluation questions are an efficient and powerful means for clarifying and communicating the focus of an evaluation, yet there is little formal guidance on this aspect of evaluation practice. Participants in this workshop will learn how to develop sound evaluation questions that can serve as a foundation for subsequent decisions about evaluation design, data collection, and data interpretation. The workshop includes hands-on exercises, tools, and resource materials to facilitate participants’ application of the workshop content in their practice. This workshop is designed for beginners, but will be beneficial for more experienced evaluators whose academic preparation did not include training on developing evaluation questions.

      You will learn how to:

      • Identify and utilize multiple information sources to inform the development of evaluation questions
      • Create criteria for evaluation questions and apply them in developing or selecting questions to guide an evaluation
      • Align data collection and data interpretation with evaluation questions to ensure a useful and meaningful evaluation

      Audience: Individuals responsible for planning, conducting, or commissioning evaluations. 

      Lori Wingate is the Director of Research at The Evaluation Center at Western Michigan University (WMU). She has a Ph.D. in interdisciplinary evaluation from WMU and 25 years of experience in the field of program evaluation. She directs EvaluATE, the evaluation support center for the National Science Foundation’s Advanced Technological Education (ATE) program and has led a range of evaluation projects at WMU focused on STEM education, public health, and higher education initiatives. Since 2011, she has provided technical consultation as a subject matter expert in evaluation to the U.S. Centers for Disease Control and Prevention. Dr. Wingate has led more than 50 webinars and workshops on evaluation in a variety of contexts and is an associate member of the graduate faculty at WMU.  

      Workshop 13: Evaluating Organizational Collaboration and Networks

      Offered: Tuesday, June 6, 1:00 p.m. - 4:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speaker: Rebecca Woodland

      Level: Intermediate

      “Collaboration” is a ubiquitous, yet misunderstood, under-empiricized, and un-operationalized construct. Program leaders and organizational stakeholders looking to collaborate and build networks struggle to identify, practice, and evaluate it effectively. In this workshop, we will explore how the principles of collaboration theory can be used to plan, evaluate, and improve collaboration in the context of organizations/programs, partnerships, and networks.

      Together, we will practice strategies for assessing levels of integration, stages of development, cycles of inquiry, and structures of networks, along with specific tools for data collection, analysis, and reporting. You will have the opportunity to increase your capacity to quantitatively, qualitatively, and visually examine the development of inter- and intra-organizational collaboration. We will practice using techniques that are currently being used in the evaluation of organizational collaboration in a range of settings and sectors, including NSF-sponsored Computer Science for All Initiatives, PK-16 educational reform and the development of professional learning communities, grant- and contract-sponsored endeavors such as the Massachusetts Board of Registered Nurses Patient Safety Initiative, CDC-sponsored activities of the Association of State and Territorial Dental Directors (ASTDD), the federal Animal and Plant Health Inspection Service (APHIS), and EPSCoR/NSF grant-sponsored inter-jurisdictional research programs.

      You will learn to:

      • Recognize five fundamental stages of evaluation of organizational collaboration embodied in the Collaboration Evaluation and Improvement Framework (Woodland & Hutton, 2012)
      • Operationalize “collaboration” so as to understand and be able to evaluate the construct
      • Use field-tested and validated strategies, tools, and protocols for the qualitative, quantitative, and visual assessment of collaboration
      • See how social network analysis can be utilized to inventory, visualize, and mathematically describe the development of organizational collaboration and networks over time (see the sketch after this list)
      • Understand that an increase in partnerships or “more” is not necessarily better
      • Recognize, energize, and re-organize patterns of interaction between people and organizations so as to more effectively address complex societal issues
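
      As a hypothetical illustration of the social network analysis mentioned above (not the presenter's tools), here is a short Python sketch using networkx; the organization names and ties are invented.

import networkx as nx

# Hypothetical inter-organizational ties observed in a collaboration
ties = [("Health Dept", "School District"),
        ("Health Dept", "Food Bank"),
        ("Food Bank", "School District"),
        ("Clinic", "Health Dept")]
G = nx.Graph(ties)

print("Density:", round(nx.density(G), 2))            # overall connectedness
print("Degree centrality:", nx.degree_centrality(G))  # who holds the most ties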

      Audience: Evaluation practitioners in all sectors

      Rebecca Woodland is Chair of the Department of Educational Policy, Research, and Administration at the University of Massachusetts Amherst and has facilitated workshops and courses for adult learners for more than 15 years. She is an AEA “Dynamic Dozen” honoree, recognized as one of the Association’s most effective presenters over the past 10 years. Dr. Woodland is a member of the American Journal of Evaluation’s editorial board. Her recent publications include Evaluation of a Cross-Cultural Training Program for Pakistani Educators in Evaluation and Program Planning (2017); Evaluating PreK-12 Professional Learning Communities: An Improvement Science Perspective in the American Journal of Evaluation (2012); and Evaluating Organizational Collaborations: Suggested Entry Points and Strategies in the American Journal of Evaluation (2016). Rebecca notes, “I love creating opportunities in which all participants experience meaningful learning, find the material useful and relevant, and we have fun at the same time.”

      Workshop 14: Crafting Quality Questions — The Art & Science of Survey Design

      Offered: Tuesday, June 6, 9:00 a.m. - 12:00 p.m.

      Speaker: Sheila Robinson

      Level: Beginner

      Surveys are a popular data collection tool for their ease of use and the promise of reaching large populations with a potentially small investment of time and technical resources. As survey fatigue grows, evaluators must be even more judicious in using surveys and craft richer, more concise, and more targeted questions to yield meaningful data. Successful survey research also requires an understanding of the cognitive processes that respondents employ in answering questions with accuracy and candor. Using rich examples and an interactive approach, the facilitator will demonstrate why survey researchers must engage in a rigorous, intentional survey design process in order to craft high quality questions -- arguably the most critical element of any survey.

      Participants in this session will work through the survey design process through a series of activities, developing an understanding of the cognitive aspects of survey response and question design. Participants will increase their ability to craft high quality survey questions, and leave with resources to further develop their skills, including a copy of a draft checklist for crafting quality questions, soon to be published in a textbook.

      You will learn how to:

      • Craft high quality survey questions
      • Effectively structure and design surveys for high quality response data

      Audience: Evaluation practitioners in all sectors.

      Sheila Robinson is a Program Evaluator and Instructional Mentor for Greece Central School District, and Adjunct Professor at the University of Rochester’s Warner School of Education. Her background is in special education and professional development, and she is a certified Program Evaluator. Her work for the school district centers on professional development, equity and culturally responsive education, and evaluation. At Warner School, she teaches graduate courses in Program Evaluation and Designing and Evaluating Professional Development. She is Lead Curator of AEA365 Tip-A-Day By and For Evaluators, Coordinator of the Potent Presentations (p2i) Initiative, and is a past Program Chair of AEA's PK-12 Educational Evaluation TIG.

      Workshop 15: A Participatory Method for Engaging Stakeholders with Evaluation Findings

      Offered: Monday, June 5, 1:00 p.m. - 4:00 p.m.; Tuesday, June 6, 9:00 a.m. - 12:00 p.m.

      Speaker: Adrienne E. Adams

      Level: All

      In this workshop, learn how to facilitate the “Expectations to Change (E2C)” process, a six-step, interactive, workshop-based method for guiding evaluation stakeholders from establishing performance standards (i.e., “expectations”) to formulating action steps toward desired programmatic change. The E2C process is designed to engage stakeholders with their evaluation findings as a means of promoting evaluation use and building evaluation capacity. The distinguishing feature of this process is that it is uniquely suited for contexts in which the aim is to assess performance on a set of indicators by comparing actual performance to planned performance standards for the purpose of program improvement. In the E2C process, stakeholders are guided through establishing standards, comparing the actual results to those standards to identify areas for improvement, and then generating recommendations and concrete action steps to implement desired changes. At its core, E2C is a process of self-evaluation, and the role of the evaluator is that of facilitator, teacher, and technical consultant.

      You will learn how to:

      • Establish performance standards
      • Compare evaluation findings to established standards to identify areas for improvement
      • Generate recommendations targeting identified areas for improvement
      • Formulate concrete action steps for implementing recommendations

      Audience: Evaluation practitioners and consumers with a basic knowledge of evaluation concepts.

      Adrienne Adams is an Associate Professor of Psychology at Michigan State University. She holds a PhD in community psychology. She has evaluated local, state, and national domestic violence and sexual assault victim service programs, including the Department of Defense Domestic Abuse Victim Advocacy Pilot Program. Adrienne also serves as the Director of Evaluation for a large, urban non-profit organization that offers a wide array of supportive programs for victims of sexual assault and domestic violence. She uses participatory evaluation methods to build evaluation capacity and foster organizational learning. She is a board member of the Michigan Association for Evaluation and a member of the American Evaluation Association, and has published in the American Journal of Evaluation.

      Workshop 16: Consulting with Communities and Non-Profits

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.; Tuesday, June 6, 9:00 a.m. - 12:00 p.m.

      Speakers: Susan Wolfe; Ann Price

      Level: All

      This skill-development workshop is designed for evaluators who engage in community-based work and those who consult with nonprofit organizations (including government, school districts, and medical institutions). Through lecture, discussion, and exercises, this hands-on, interactive workshop will provide the foundations for practice in effective community engagement. Workshop presenters will review topics such as the types of personal qualities and professional competencies needed to be effective in community practice. Participants will complete a personal competencies inventory that will provide insights regarding their own professional developmental needs. Through case studies, attendees will identify ways to work successfully with community-based organizations to facilitate collaboration while navigating resource limitations and community and organizational politics.

      You will learn:

      • What community consulting is and how it differs from other types of consulting
      • Personal characteristics and professional competencies needed for success and effectiveness
      • To identify your specific areas of strengths and weaknesses related to personal and professional competencies
      • How to help facilitate collaboration among community-based organizations and community members
      • To identify community and organizational barriers to effective community collaboration and ways to deal with these barriers as a community consultant

      Audience: Evaluators who will be working with community-based organizations, whether in-house or as independent consultants.

      Susan M. Wolfe is a community psychologist with over 30 years of experience conducting evaluations and working in communities. She has held jobs with large community-hospital districts; a community college district; a large K-12 school district; universities; a children’s mental health clinic; the federal government; and as a consultant, both independent and for non-profit organizations. She has worked on research and evaluation projects, longitudinal and cross-sectional, spanning numerous content areas that include health disparities (cancer, maternal child health); nursing homes; homelessness; K-12 through college education; domestic violence and sexual assault; teen pregnancy and adolescent development; mental health and health services. Dr. Wolfe has published in a variety of venues and presented at numerous national and international meetings. She is the co-editor (with Victoria Scott) of Community Psychology: Foundations for Practice which was published by Sage Publications in 2015 and co-author of A Guidebook for Community Consultants (with Ann Price) to be published by Oxford University Press. Her awards and accolades include the Society for Community Research and Action’s (SCRA) Distinguished Contributions to Community Psychology Practice and John Kalafat Applied Community Psychology awards; the U.S. Dept. of Health and Human Services Inspector General’s Award for Excellence in Program Evaluation and three Inspector General’s Exceptional Achievement Awards. She currently chairs the AEA Nonprofits and Foundations Topical Interest Group. She earned a PhD in Human Development from the University of Texas at Dallas. She is currently a Senior Consultant with CNM Connect in Dallas, TX where she provides evaluation and capacity building services to nonprofit organizations.

      Ann Webb Price is the founder and President of Community Evaluation Solutions, a social science evaluation firm based in the metro-Atlanta area. Dr. Price is a community psychologist with almost 20 years of experience designing, implementing, and evaluating community-based organizations, foundations and state and federally funded prevention programs. Prior to working in evaluation, Dr. Price worked in substance abuse addiction treatment and prevention. She conducts evaluations in many areas including education and dropout prevention, substance abuse prevention, youth development, foster care advocacy, child care, community coalitions, and public health. She has evaluated several community coalitions including the Drug Free Coalition of Hall County, the Drug Free Coalition of Columbia County, the Michigan Oral Health Coalition, and the Columbia County Family Connections, among others. Prior to CES, Dr. Price worked as a Senior Data Analyst at ICF Macro on a national multi-site longitudinal study of the Comprehensive Community Mental Health Services for Children and Their Families (CMHS). Dr. Price earned her Doctorate in Community Psychology from Georgia State University and an M.A. in Clinical Psychology from the University of West Florida. Ann is an active member of the American Evaluation Association and its Community Psychology Topical Interest Group.

      Workshop 17: Evaluating Coalitions and Collaboratives

      Offered: Monday, June 5, 1:00 p.m. - 4:00 p.m.; Tuesday, June 6, 1:00 p.m. - 4:00 p.m.

      Speakers: Susan Wolfe; Ann Price

      Level: All

      This skill-development workshop is designed for evaluators who evaluate coalitions and community collaboratives. Through lecture, discussion, and exercises, this hands-on, interactive skills development workshop will provide the foundations and tools needed to conduct evaluations of coalitions. Workshop presenters will review topics such as frameworks for evaluating coalitions; measures and tools; and challenges. Attendees will participate in a coalition simulation that will highlight some of the challenges and issues. The presenters will share their experiences and provide case studies as a basis for discussion to apply the material to the types of situations and settings attendees encounter.

      You will learn:

      • Theoretical and methodological frameworks that can be useful to analyze and evaluate coalitions
      • Measures and tools available for coalition evaluation
      • Challenges to evaluating coalitions and how they can be overcome
      • Best practices that can be applied to ensure success

      Audience: Evaluators who will be working with coalitions and other community-based collaboratives

      Susan M. Wolfe is a community psychologist with over 30 years of experience conducting evaluations and working in communities. She has held jobs with large community-hospital districts; a community college district; a large K-12 school district; universities; a children’s mental health clinic; the federal government; and as a consultant, both independent and for non-profit organizations. She has worked on research and evaluation projects, longitudinal and cross-sectional, spanning numerous content areas that include health disparities (cancer, maternal child health); nursing homes; homelessness; K-12 through college education; domestic violence and sexual assault; teen pregnancy and adolescent development; mental health and health services. Dr. Wolfe has published in a variety of venues and presented at numerous national and international meetings. She is the co-editor (with Victoria Scott) of Community Psychology: Foundations for Practice which was published by Sage Publications in 2015 and co-author of A Guidebook for Community Consultants (with Ann Price) to be published by Oxford University Press. Her awards and accolades include the Society for Community Research and Action’s (SCRA) Distinguished Contributions to Community Psychology Practice and John Kalafat Applied Community Psychology awards; the U.S. Dept. of Health and Human Services Inspector General’s Award for Excellence in Program Evaluation and three Inspector General’s Exceptional Achievement Awards. She currently chairs the AEA Nonprofits and Foundations Topical Interest Group. She earned a Ph.D. in Human Development from the University of Texas at Dallas. She is currently a Senior Consultant with CNM Connect in Dallas, TX where she provides evaluation and capacity building services to nonprofit organizations.

      Ann Webb Price is the founder and President of Community Evaluation Solutions, a social science evaluation firm based in the metro-Atlanta area. Dr. Price is a community psychologist with almost 20 years of experience designing, implementing, and evaluating community-based organizations, foundations and state and federally funded prevention programs. Prior to working in evaluation, Dr. Price worked in substance abuse addiction treatment and prevention. She conducts evaluations in many areas including education and dropout prevention, substance abuse prevention, youth development, foster care advocacy, child care, community coalitions, and public health. She has evaluated several community coalitions, including the Drug Free Coalition of Hall County, the Columbia County Drug Free Coalition, and three Georgia Alcohol Prevention Projects. Ann is also a consultant with Georgia Family Connection Partnership. Prior to CES, Dr. Price worked as a Senior Data Analyst at ICF Macro on a national multi-site longitudinal study of the Comprehensive Community Mental Health Services for Children and Their Families (CMHS). Dr. Price earned her Doctorate in Community Psychology from Georgia State University and an M.A. in Clinical Psychology from the University of West Florida. Ann is an active member of the American Evaluation Association (AEA) and the Community Psychology Topical Interest Group.

      Workshop 18: Data Visualization 

      Offered: Tuesday, June 6, 9:00 a.m. - 12:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speaker: Susan Kistler

      Level: Beginner

      In this interactive workshop, learn how to display data in ways that increase understanding, garner attention, and further the purpose of your work. Participants will draw on research in cognition, theories of perception, and principles of graphic design. Through exercises, critical review, and discussion of multiple examples, you will learn how to make crucial design decisions and to justify those decisions based on multiple sources of evidence. Leave with samples, checklists, reading suggestions, and even a bit of chocolate.
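
      As a small, hypothetical example of one such design decision (not drawn from the workshop), the Python sketch below mutes all but one bar so the 'take home' finding carries the chart; the data are invented.

import matplotlib.pyplot as plt

sites = ["Site A", "Site B", "Site C", "Site D"]
completion = [48, 62, 91, 55]      # hypothetical program completion rates (%)
colors = ["steelblue" if s == "Site C" else "lightgray" for s in sites]

fig, ax = plt.subplots()
ax.bar(sites, completion, color=colors)
ax.set_title("Site C far outpaces the other sites in program completion (%)")
for side in ("top", "right"):      # trim non-data ink
    ax.spines[side].set_visible(False)
plt.show()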

      You will learn how to:

      • Choose from among multiple types of visualizations
      • Accentuate the key 'take home' message in your data
      • Structure a visualization to increase comprehension
      • Leverage data visualizations as tools in multiple contexts

      Audience: Evaluation practitioners in all sectors.

      Susan Kistler is an independent consultant and former Executive Director of the American Evaluation Association (AEA). Susan has worked with groups in nonprofits, education, government, and business to improve their data collection and use. She brings twenty years of experience as a teacher and trainer to her workshops and was identified as a top facilitator as part of the American Evaluation Association's Potent Presentations Initiative (p2i).

      Workshop 19: Low-Cost / No-Cost Tech Tools for Evaluators

      Offered: Monday, June 5, 1:00 p.m. - 4:00 p.m.; Tuesday, June 6, 1:00 p.m. - 4:00 p.m.

      Speaker: Susan Kistler

      Level: All

      This fun, fast-paced workshop consists of demonstrations of multiple technology tools used by evaluators. It is not meant to be a comprehensive exploration, but rather an overview of a range of tools that meet the real-world needs of working professionals. Each tool will be demonstrated with enough information for you to decide if it is worth further independent exploration and you will leave with access to learn more about each one. The workshop will end with a call to the audience to contribute their favorite tools, so come ready to learn and ready to share. The tools explored in this session are not only appropriate for evaluators, but also for students, researchers, entrepreneurs, and consultants.

      You will learn how to:

      • Manage online resources
      • Gather and analyze data
      • Engage stakeholders in the evaluation process
      • Take your reporting in new directions

      Audience: Evaluation practitioners in all sectors.

      Susan Kistler is an independent consultant and former Executive Director of the American Evaluation Association (AEA). Susan has worked with groups in non-profits, education, government, and business to improve their data collection and use. She brings 20 years of experience as a teacher and trainer to her workshops and was identified as a top facilitator as part of the American Evaluation Association's Potent Presentations Initiative (P2i).

      Workshop 20: Popping the Question — Developing Quality Survey Items

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.

      Speaker: Susan Kistler

      Level: Beginner

      Developing reliable and valid surveys that gather usable, informative data requires the selection of just the right questions. We will explore (a) when to use open-ended versus closed-ended questions, including issues of feasibility of analysis; (b) the range of question types, such as yes/no, multiple choice, scales, ranking, short answer, and factors that influence which type to use; (c) question ordering and its impact on response; (d) careful wording to avoid common question development pitfalls; and (e) how the mode of delivery (online, pencil-and-paper, telephone) influences question wording. Attendees will leave with a range of example survey questions, a guide to assist with the selection of question types, and a set of resources for further investigation of surveys and question wording. This workshop is fast-paced and hands-on, including critical analysis of multiple real-world examples of survey questions.

      You will learn:

      • How to align question type and wording to the evaluand
      • Ways to increase the likelihood of gathering actionable information
      • Approaches to improving the reliability and validity of survey responses
      • Question structures that decrease analysis time while maintaining data quality

      Audience: Evaluation practitioners in all sectors.

      Susan Kistler is an independent consultant and former Executive Director of the American Evaluation Association (AEA). Susan has worked with groups in non-profits, education, government, and business to improve their data collection and use. She brings 20 years of experience as a teacher and trainer to her workshops and was identified as a top facilitator as part of the American Evaluation Association's Potent Presentations Initiative (P2i).

      Workshop 21: An Executive Summary is not Enough — Effective Evaluation Reporting Techniques

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.; Monday, June 5, 1:00 p.m. - 4:00 p.m.

      Speaker: Kylie Hutchinson

      Level: Beginner

      As an evaluator you are conscientious about conducting the best evaluation possible, but how much thought do you give to communicating your results effectively? Do you consider your job complete after submitting a final report? Reporting is an important skill for evaluators who care about seeing their results disseminated widely and recommendations actually implemented, but there are alternatives to the traditional lengthy report. This interactive workshop will present an overview of four key principles for effective reporting and engage participants in a discussion of its role in effective evaluation. Participants will leave with an expanded repertoire of innovative reporting techniques.

      You will learn:

      • The role of effective reporting in evaluation practice
      • Four key principles for communicating results effectively
      • Three alternative techniques for communicating your results

      Audience: Evaluation practitioners in all sectors.

      Kylie Hutchinson is the principal of Community Solutions Planning & Evaluation, a small consulting firm specializing in evaluation, program planning, and program sustainability. She has thirty years’ experience assisting organizations of all shapes and sizes across North America to build more efficient, effective, and sustainable programs. Kylie is a regular presenter on evaluation topics for the American Evaluation Association (AEA), the Summer Evaluation Institute, the Canadian Evaluation Society, and the U.S. Centers for Disease Control and Prevention. Her primary passion is translating evaluation theory into engaging and useful resources for evaluators and non-profits. She is the author of Survive and Thrive: Three Steps for Securing Your Program’s Sustainability.

      Workshop 22: Introduction to Infographics and Strategies for Use in Evaluation

      Offered: Tuesday, June 6, 1:00 p.m. - 4:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speaker: Stephanie Baird-Wilkerson

      Level: Beginner

      Do you want to learn how to use infographics to communicate evaluation findings in an effective and engaging way? This workshop introduces infographic basics, best practices, and practical tips for using low-cost tools to produce well-designed infographics for evaluation. No experience with graphic design or infographics required. Participants will learn about the purpose, features, and use of infographics in evaluation by examining a variety of different infographic styles and types. The workshop will be interactive and participants will have the opportunity to share ideas, ask questions, and view demonstrations of how to develop an infographic. Attendees will receive a handout with resources for future use.

      You will learn:

      • The purpose of infographics
      • Best practices for using infographics in evaluation
      • How to use various tools to create an infographic

      Audience: Evaluators who want to promote use of evaluation findings for diverse stakeholder groups.

      Stephanie Wilkerson, President of Magnolia Consulting, brings over 20 years of experience in conducting rigorous and practical research and evaluation studies in preK-20 education settings. She specializes in quantitative and qualitative research design and analysis, program evaluation, implementation fidelity and assessment, and instrument construction, and offers technical assistance in evaluation capacity building, infographics, data use, logic model development, and innovation configuration mapping. Her studies respond to the information needs and priorities of a variety of evaluation stakeholders including policymakers, program funders and developers, school and district administrators, teachers, community members, and students. To promote evaluation use among stakeholder audiences she incorporates infographics as a communication and knowledge utilization tool for sample recruitment, study orientations, and evaluation reporting. In her experience, she has found that the use of infographics helps facilitate meaningful dialogue among evaluation stakeholders about the implications of evaluation findings in informing policy and practice.

      Workshop 23: Mixed Methods Design in Evaluation

      Offered: Tuesday, June 6, 1:00 p.m. - 4:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speaker: Donna Mertens

      Level: Intermediate

      Developments in the use of mixed methods have extended beyond the practice of combining surveys and focus groups. The sophistication of mixed methods designs in evaluation will be explained and demonstrated through illustrative examples taken from diverse sectors and geographical regions. Mixed methods designs will include major types of evaluation: effectiveness of interventions, instrument development, policy evaluation, and systematic reviews. The designs will be tied to the major branches in evaluation as defined by Alkin (2013) and Mertens and Wilson (2012). Participants will have the opportunity to create mixed methods designs using evaluation vignettes for each type of evaluation.

      You will learn how to:

      • Identify the components of mixed methods designs in evaluation for the purpose of determining intervention effectiveness, instrument development, policy evaluation, and systematic review
      • Apply the concepts of mixed methods design for a specific context using a case study.

      Audience: Evaluation practitioners who are interested or engaged in mixed methods evaluation.

      Donna M. Mertens is Professor Emeritus, Gallaudet University, and past president and board member of the American Evaluation Association (AEA). She was the editor for the Journal of Mixed Methods Research and a founding board member of the Mixed Methods International Research Association and the International Organization for Cooperation in Evaluation. She has conducted evaluations and training in transformative mixed methods in 70 countries. Her specialization is bringing a lens of social justice and human rights to the evaluation of programs for members of marginalized communities.

      Workshop 24: Introduction to Social Impact Measurement and Evaluation

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.; Tuesday, June 6, 9:00 a.m. - 12:00 p.m.

      Speakers: Veronica Olazabal and Karim Harji

      Level: Beginner to Advanced Beginner

      The social sector is changing, moving toward right-sized, fit-for-purpose, and leaner approaches to assessing results, an area broadly referred to as social impact measurement (SIM). In addition to monitoring and evaluating outcomes and impacts, today’s M&E professionals need to be equipped with the tools and resources needed to navigate this new landscape. This workshop will demystify SIM, situate these trends and their implications for evaluation, draw the connection between SIM and evaluation, and provide a menu of tools and resources available for M&E practitioners working in this new environment.

      You will learn:

      • The history of social impact measurement and implications for M&E
      • Practical tools, tips and techniques important for diagnosing and designing right-sized measurement and evaluation
      • A menu of approaches being used to measure and assess impact

      Audience: Attendees charged with measuring, monitoring, and evaluating impact from the private sector, impact investors, social enterprises, and other market-based solutions, as well as those interested in learning more about the growing social impact measurement space.

      Veronica Olazabal is Director of Measurement, Evaluation and Organizational Performance at The Rockefeller Foundation. Her professional portfolio spans two decades and four continents and includes working with the MasterCard and Rockefeller Foundations’ global programs. She has also led the monitoring, evaluation, and research efforts of social venture Nuru International, UMCOR (The United Methodist Committee on Relief), and the Food Bank for New York City. Veronica is the recipient of the 2014 American Evaluation Association’s (AEA) Alva and Gunnar Myrdal Evaluation Practice Award, currently serves on the AEA Board, is co-Founder of the AEA Social Impact Measurement (SIM) Topical Interest Group (TIG), and co-chairs the AEA International and Cross-Cultural Evaluation (ICCE) TIG.

      Karim Harji is Co-Founder and Director at Purpose Capital, an impact investment advisory firm.  He supports impact investors and grantmakers in designing, implementing, and evaluating impact investment strategies and portfolios. He co-authored the strategic assessment of the Rockefeller Foundation’s Impact Investing Initiative, and the global landscape review, Accelerating Impact: Achievements, Challenges and What’s Next in Building the Impact Investing Industry.  He was previously an Advisor to the Rockefeller Foundation on social impact measurement, and a Member of the Impact Measurement Working Group of the G8 Social Impact Investment Task Force.  He is the co-Founder of the AEA Social Impact Measurement (SIM) Topical Interest Group (TIG).

      Workshop 25: Introduction to Big Data, Data Analytics and Evaluation

      Offered: Tuesday, June 6, 9:00 a.m. - 12:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speaker: Pete York

       Level: Beginner to Intermediate 

      Evaluators are known for their ability to generate retrospective data and evidence about a project or program. Within the evaluation field, formative and summative assessments are abundant. However, new forms of tech-enabled data collection, including big data, may offer evaluators new capabilities. The purpose of this workshop is to provide an introduction to the tools and techniques for collecting and analyzing big data, and to use case studies to illustrate the multiple ways in which big data and data analytics can strengthen program evaluation.

      You will learn:

      • Tools and techniques for collecting and analyzing big data

      Audience: Evaluation practitioners in all sectors interested in the role of data analytics in evaluation.

      Pete York, MSSA, has over 20 years of experience as a consultant and researcher in the evaluation and nonprofit fields and is a national spokesperson on social impact and impact measurement issues. He has designed and led numerous research and evaluation studies with private philanthropies, corporations, nonprofit organizations, and government agencies; examples include the California Department of Corrections and Rehabilitation, the Florida Department of Juvenile Justice, Grantmakers for Effective Organizations, Gap, Inc., the Philadelphia Zoo, the David and Lucile Packard Foundation, Atlantic Philanthropies, the California Endowment, the Center for Employment Opportunities, Camp Fire USA, and YMCA of the USA. He has authored book chapters, academic and professional articles, and a book on evaluation for philanthropists, “Funder's Guide to Evaluation: Leveraging Evaluation to Improve Nonprofit Effectiveness”. He has spent the last five years developing analytic techniques that leverage machine learning algorithms and big data to create predictive and prescriptive models and tools for social change agents. He recently co-authored a book chapter on this work, “The Application of Predictive Analytics and Machine Learning to Risk Assessment in Juvenile Justice: The Florida Experience.”

      Workshop 27: Training Evaluation Framework and Tools (TEFT)

      Offered: Tuesday, June 6, 1:00 p.m. - 4:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speakers: Gabrielle O’Malley, Anja Minnick, Michelle Leander-Griffith

      Level: Advanced Beginner

      This workshop will provide an overview of outcome evaluation for training using the Training Evaluation Framework and Tools (TEFT, http://www.go2itech.org/resources/TEFT). TEFT is a set of resources designed to help evaluators, implementers, and program managers at all levels plan successful evaluations of program outcomes.  The resources are organized as six steps to guide the planning of a training outcome evaluation. The framework helps stakeholders articulate the conceptual link between trainings for health care workers and meaningful outcomes at the individual, facility, and population levels. The tools guide stakeholders to anticipate the situational factors that might affect an evaluation, identify the appropriate level of outcome to evaluate, and differentiate which training interventions merit an outcome evaluation. While the framework and tools were developed for training for health care workers, they are applicable more broadly. Participants will use a case study to work through the steps of the evaluation approach.

      You will learn the following six steps for planning a training outcome evaluation:

      • Identify anticipated outcomes
      • Address situational factors
      • Refine the scope of the evaluation
      • Define evaluation questions, objectives and indicators
      • Choose an evaluation design and method
      • Plan the evaluation

      Audience: Attendees with some background in evaluation; the workshop is geared specifically toward those who want to move beyond training output data to related outcomes.

      Dr. Gabrielle O’Malley is the University of Washington’s International Training and Education Center for Health (I-TECH) Director of Implementation Science. She has worked as an applied research and evaluation professional for over 25 years. Her experience includes a wide variety of international and domestic programs including child survival, private agricultural enterprise, medical education, community technology, reproductive health, HIV prevention (PrEP), care and treatment, as well as applied research for private industry.

      Michelle Leander-Griffith is an Evaluation Fellow in CDC’s Division of Laboratory Systems (DLS). Before joining DLS, Michelle worked for two years as an ORISE Fellow in CDC’s Office for Public Health Preparedness and Response, working on many aspects of disaster preparedness and supporting evaluations of an agency-wide personal preparedness intervention. Prior to CDC, Michelle worked in cardiovascular biomedical research.

      Anja Minnick is the Associate Director of Evaluation in the Division of Laboratory Systems at the Centers for Disease Control and Prevention (CDC). She has over 14 years of health and international development experience with a focus on monitoring and evaluation, strategic planning, organizational development, financial and information management, and supply chain management for HIV/AIDS. Minnick also has experience managing complex HIV/AIDS portfolios with annual budgets ranging from $88 million to $500 million.

      Workshop 28: Using Process Maps and Other Visual Tools in Program Evaluation

      Offered: Tuesday, June 6, 9:00 a.m. - 12:00 p.m.; Wednesday, June 7, 9:00 a.m. - 12:00 p.m.

      Speaker: Val Carlson

      Level: Beginner to Intermediate

      Shining a bright light on process implementation (such as patient flow within a clinic, the steps to organize and lead a community meeting, or approvals required to publish study results) can illuminate bottlenecks, redundancies, and other areas for improvement. Identifying these opportunities, often as a part of process evaluation, makes it easier to propose changes that will lead to better project outcomes and performance improvement over time. This session will focus on process maps and other visual tools that are most useful in steps 4 and 5 of CDC’s 6-Step Evaluation Framework (gather credible evidence and justify your conclusions), although they can be used at any point in an evaluation project.

      You will learn:

      • When process maps and other visual tools can best contribute to program evaluation
      • How to use at least three process mapping tools to identify areas for improvement in your projects (spaghetti diagram, flow chart, and value stream map)
      • Where to find additional resources on process mapping and performance improvement
      • If time and audience interest allow, additional tools such as swim lanes, run charts, 2x2 matrices, Gantt charts, and/or critical paths may be covered briefly

      Audience: All participants; neither a background in performance improvement nor experience using CDC’s 6-Step Evaluation Framework is required.

      Valeria P. Carlson is a Public Health Advisor in the Program Performance and Evaluation Office at CDC. She is a subject matter expert in quality improvement, performance improvement, process analysis, and performance management. Her current responsibilities include developing training materials and providing technical assistance for CDC offices interested in building their capacity to use data to drive decision-making. Ms. Carlson also has a background in public health systems, public health accreditation, community health assessment, communications, and public health governance, with experience at local, state, national, and international organizations.

      Workshop 29: Using Evaluation to Develop Equitable Solutions to Community Issues through the Lens of Diversity and Inclusion

      Offered: Monday, June 5, 1:00 p.m. - 4:00 p.m.; Tuesday, June 6, 1:00 p.m. - 4:00 p.m.

      Speakers: Sylvia Burgess, Karen Terrell Jackson, Forrest Toms

      Level: Beginner

      Understanding evaluation for diversity and inclusion enables organizations to overcome organizational and systemic barriers to develop equitable solutions to community issues.  This workshop will engage participants in understanding evaluation through a diversity and inclusion lens. Facilitators will walk participants through the Diversity and Inclusion Model (Toms & Burgess, 2014) with an emphasis on evaluating outcomes, programs, and implementation. The course format is geared to experiential learners. Organizational and program data and case study examples will be used. Participants should feel free to bring their own program data to explore during this course.

      You will learn:

      • Strategies for understanding evaluation through the lens of diversity and inclusion
      • How to apply evaluation for diversity and inclusion through hands-on case study learning
      • Tools for evaluating and assessing for diversity and inclusion and for removing institutional and systemic barriers to equity
      • Strategies for being inclusive and recognizing diversity and inclusion (D&I) challenges
      • How to use organizational data to build equitable solutions

      Audience: Evaluation practitioners in all sectors.

      Sylvia W. Burgess is the Vice-President of Operations and Senior Partner with One Step at a Time Consulting Services for leadership training and development. Dr. Burgess facilitates team training and professional development for team building through classroom and simulation activities, strategy, and self-improvement topics. She has over 18 years of experience in teaching, facilitation, development, and curriculum design in leadership development, diversity and cultural competency, community engagement, team building, and conflict resolution. Dr. Burgess is a published author in the areas of spiritual capital and community engagement. She received a BA in Speech and Language Pathology and an MPA from the University of North Carolina at Greensboro and earned a Ph.D. in Leadership Studies from North Carolina A&T State University. She has proven leadership and senior management experience in business, operational management, and training, and with the Center for Creative Leadership. Dr. Burgess has served as an adjunct professor at North Carolina A&T State University in the Leadership Studies doctoral program and is an active member of the community, serving in several service roles.

      Karen Terrell Jackson is Principal Evaluation Consultant and Owner of Katalyst Innovative Consulting Services. Dr. Jackson has experience in the fields of K-12 and higher education, social science research, evaluation, and policy analysis. Karen combines her early career experience in the physical sciences and STEM education with expertise in evaluation and research to empower the organizations she works with to use data in order to facilitate meaningful change. Her more than 15 years as a successful mathematics instructor and almost 9 years as an academic support director at a community college provided her the opportunity to gain extensive experience in the development, facilitation, training, and implementation of educational programs. She has also successfully worked with interdisciplinary teams to assist various non-profit organizations with strategic planning, design, and using data to make meaningful policy decisions. Her evaluation and research focus areas include leadership, poverty education, STEM education, board professional development, and general program and institutional redesign. Dr. Jackson’s experience also includes peer-reviewed publications and conference presentations that focus on data-based decision making, using technology in mathematics education, diversity and inclusion, and cultural transformations within an organization. Organizations she has worked with most recently include Custom Evaluation Services, Inc., Shepherd Higher Education Consortium on Poverty, The Greensboro Police Department, MDC, and The United Way of Chelan and Douglas Counties. As a professor in Leadership Studies at North Carolina A&T State University, Dr. Jackson teaches online and face-to-face graduate quantitative research methods and research design courses. She has a B.S. in Chemistry and an M.Ed. in mathematics and curriculum and instruction from The University of Southern Mississippi and a Ph.D. in educational research and policy analysis from North Carolina State University.

      Forrest D. Toms is an Associate Professor in the Department of Leadership Studies Doctoral Program at North Carolina A&T State University. His current research is in the areas of leadership development with faith-based/community leaders around spiritual capital and civic engagement. Dr. Toms is also a Senior Partner with One Step at a Time Consulting. Toms is recognized nationally for his systems approach and processes related to diversity and cultural competency in the education, health, and mental health fields. He has published articles and produced books and multimedia training products in the areas of diversity, cultural competency, and community engagement. Dr. Toms received his BS and MA degrees in Psychology from Middle Tennessee State University, Murfreesboro, Tennessee, and a PhD in Developmental Psychology from Howard University, Washington, DC.

      Workshop 30: Strategies for Interactive Evaluation Practice — An Evaluator's Dozen

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.; Monday, June 5, 1:00 p.m. - 4:00 p.m.

      Speaker: Laurie Stevahn

      Level: Intermediate

      In its many forms, evaluation practice requires evaluators to be skilled facilitators of interpersonal interactions. Whether you are completely in charge, working collaboratively with program staff, or coaching individuals conducting their own study, you need to interact with people throughout the course of an evaluation. This workshop will provide practical frameworks and strategies for analyzing and extending your own practice. Through presentation, demonstration, discussion, reflection, and case study, you will consider and experience strategies to enhance involvement and foster positive interaction in your evaluation practice.

      You will learn:

      • The three frameworks that underpin interactive evaluation practice (IEP)
      • Rationales for engaging clients/stakeholders in various evaluation tasks
      • Interactive strategies to facilitate meaningful involvement, including voicing variables, cooperative interviews, making metaphors, data dialogue, jigsaw, graffiti/carousel, cooperative rank order, and others
      • Strategic applications useful in your own evaluation context

      Audience: Evaluation practitioners in all sectors

      Laurie Stevahn is a professor in the Educational Leadership doctoral program in the College of Education at Seattle University, where she teaches research/evaluation, leadership, and social justice courses. She has over 30 years of experience as an educator, evaluator, and researcher specializing in creating conditions for cooperative interaction, constructive conflict resolution, participatory evaluation, and evaluation capacity building. Laurie and Jean A. King, a professor and director of Graduate Studies in the Department of Organizational Leadership, Policy, and Development at the University of Minnesota, are co-authors of Interactive Evaluation Practice: Mastering the Interpersonal Dynamics of Program Evaluation (Sage, 2013) and Needs Assessment Phase III: Taking Action for Change (Sage, 2010).

      Workshop 31: Retrospective Pre-Post Methodology

      Offered: Monday, June 5, 1:00 p.m. - 4:00 p.m.; Tuesday, June 6, 1:00 p.m. - 4:00 p.m.

      Speakers: Tony Lam, Edgar Valencia Acuna

      Level: Advanced Beginner

      The confluence of the need to measure change and the efficiency of self-report has led to the popularity of using self-report to measure change in social science and human service research and evaluation. Unfortunately, self-report pretesting can introduce response shift, treatment sensitization, and carryover biases, and pretest data can be logistically difficult to obtain. Consequently, researchers resort to gathering self-report data about pretest status or change retrospectively at post-test time. This approach can involve asking participants to estimate (1) post- and pre-intervention status (post + retrospective pretest), (2) post-intervention status and amount of change (post + perceived change), or (3) change from pre-test to post-test (perceived change). Since all of these retrospective methods rely on self-report and involve measuring change, and not all of them require measuring the pre-intervention condition, the presenter refers to this approach as Retrospective Self-Report of Change (RSRC), rather than the retrospective pretest method or then-pretest method, as used in the literature.

      While the RSRC approach effectively eliminates the aforementioned response biases associated with pretesting, it nonetheless introduces unique biases above and beyond the typical self-report biases: at the time of estimating pretest status, (1) participants have access to the post-test data, which encourages an anchor-and-adjust heuristic that can result in over- or under-estimation, and (2) participants must recall their pretest status, so their estimates can be distorted by memory error. Among the three methods mentioned above, the most commonly used RSRC method is the post + retrospective pretest. Thus far, very limited research effort has been devoted to examining method-specific biases and moderating variables in this and the other two RSRC methods.

      You will learn:

      • Definition and comparison of the three RSRC methods
      • Potential biases pertaining to self-report in general and to RSRC in particular
      • Item presentation moderating variables in the post + retrospective pretest method
      • Direction for future research and development

      Audience: Attendees with some background in evaluation, but who desire an overview and an opportunity to examine challenges and strategies in using self-report to measure change.

      Tony CM Lam is an Associate Professor with the Ontario Institute for Studies in Education (OISE) at the University of Toronto. His specialization is applied measurement and evaluation. Currently, his research focuses on self-report errors and the use of self-report in program evaluation.

      Workshop 32: Using Technology to Enhance Applied Research & Evaluation

      Presented in Partnership with The Evaluators’ Institute (TEI)

      Offered: Monday, June 5, 1:00 p.m. - 4:00 p.m.; Tuesday, June 6, 1:00 p.m. - 4:00 p.m.

      Speaker: Tarek Azzam

      Level: All

      This workshop will focus on a range of new technological tools and examine how they can be used to improve applied research and program evaluations. Specifically, we will explore the application of free or inexpensive software to engage clients and a range of stakeholders, formulate and prioritize research and evaluation questions, express and assess logic models and theories of change, track program implementation, provide continuous improvement feedback, determine program outcomes/impact, and present data and findings. Participants will be given information on how to access tools such as crowdsourcing, data visualization, and interactive conceptual framing software to improve the quality of their applied research and evaluation projects.

      You will learn how to:

      • Apply a range of free and affordable technologies to your evaluation practice
      • Use technology to effectively engage clients and stakeholders
      • Access a wide variety of technological tools for crowdsourcing, data visualization, and conceptual framing

      Audience: Evaluation practitioners who focus on applied research and are interested in incorporating new technologies into their practices.

      Tarek Azzam, PhD, is Director of The Evaluators’ Institute and Associate Professor in the Division of Behavioral and Organizational Sciences, Claremont Graduate University. Azzam’s research focuses on developing new methods suited for real-world evaluations. These methods attempt to address some of the logistical, political, and technical challenges that evaluators commonly face in practice. His work aims to improve the rigor and credibility of evaluations and increase their potential impact on programs and policies. Azzam has also been involved in multiple projects that have included the evaluation of student retention programs at the K-12 and university levels, Science, Technology, Engineering, and Math (STEM) education programs, pregnancy prevention programs, children’s health programs, and international development efforts for the Rockefeller and Packard Foundations. Tarek Azzam is the current program chair of the Research on Evaluation TIG and a member of AEA’s recruitment committee. He previously served as program chair of the Theories of Evaluation TIG and as chair of AEA’s awards committee.

      Workshop 33: Facilitating Effective Evaluation Planning

      Presented in Partnership with The Evaluators’ Institute (TEI)

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.; Tuesday, June 6, 9:00 a.m. - 12:00 p.m.

      Speaker: Tessie Catsambas

      Level: All

      Evaluation planning is frequently the first opportunity for evaluators and clients to put their evaluations on a strong footing. Especially in the case of evaluations commissioned by the US Government or international organizations, where the competitive procurement process does not allow informal interaction and dialogue about the terms of reference, this is the first time the evaluator and client will sit down to discuss the evaluation and agree on a way forward. This workshop will focus on the desirable outcomes of evaluation planning and suggest ways to design, organize, and facilitate planning meetings with the client, key stakeholders, and the evaluation team. The workshop will demonstrate strategies to uncover assumptions; explore client expectations, context, aspirations, concerns, and intended evaluation use; and then relate these back to the terms of reference in the evaluation contract, especially the level of effort and cost. Well begun is half done: through interactive exercises, checklists, and case examples, this workshop will prepare participants to begin their evaluations well, with enthusiasm and buy-in, as well as feet planted firmly in the contract and the practical requirements of evaluation administration.

      Anastasia (Tessie) Tzavaras Catsambas is the founder and CEO/CFO of EnCompass LLC, a 17-year-old organization that provides services in evaluation, learning, leadership, and organizational development. Ms. Catsambas brings 30 years’ experience in planning, evaluation, and organizational development. She has managed large-scale international evaluations, delivered training in different aspects of evaluation, and advocated for evaluation at the global level. She has taught courses on topics such as using Appreciative Inquiry in evaluation, Outcome Mapping in logic models, participatory approaches for evaluation, developing dashboards for program monitoring, and evaluation use in advocacy and leading change. Catsambas is an innovator and practitioner in appreciative evaluation, a methodology that incorporates the systematic study of successful experiences in evaluation, and has co-authored a book entitled Reframing Evaluation Through Appreciative Inquiry (Sage Publications, 2006). In addition to her work with EnCompass, Ms. Catsambas is a faculty member at The Evaluators’ Institute (TEI), where she teaches courses on Project Management and Oversight for Evaluators.

      Workshop 34: UFOs, Bigfoot and Evidence-based Programs — What Counts as Proof for Program Development and Evaluation

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.; Tuesday, June 6, 1:00 p.m. - 4:00 p.m.

      Speakers: Michael Schooley, Joanna Elmi

      Level: Advanced Beginner

      Regardless of whether you are questioning the existence of UFOs or trying to identify evidence-based programs, one tenet holds true in program evaluation: “absence of proof is not necessarily proof of absence.” In an era with a strong emphasis on evidence-based programs and on targeting investments toward highly effective strategies, the need to appropriately value evidence when evaluating programs is paramount. However, there is a broad range of what can count as evidence when assessing program merit. This workshop will present a continuum of evidence and a framework for assessing evidence to determine best practices. Approaches to assessing and building evidence will be covered, with the relative strengths and weaknesses of various approaches discussed. Participants will have the opportunity to apply the approach to a case example.

      You will learn:

      • The continuum of evidence
      • Approaches to assessing extant evidence through review and rating
      • How to build evidence and determine best practice
      • Approaches for translating evidence for different stakeholders

      Audience: This course is designed for individuals who are looking for practical ways to assess, build, and use evidence for decision making.  All participants should have a basic knowledge of program evaluation.

      Michael Schooley is Chief of the Applied Research and Evaluation Branch at the Division for Heart Disease and Stroke Prevention, Centers for Disease Control and Prevention (CDC). Michael has been working with CDC for over 20 years, focusing on program evaluation, performance measurement, policy research, and research translation.

      Joanna Elmi is a Health Scientist in the Evaluation and Program Effectiveness Team of the Applied Research and Evaluation Branch at the Division for Heart Disease and Stroke Prevention, Centers for Disease Control and Prevention (CDC). She has more than 10 years of evaluation experience at CDC, and her focus is on program evaluation, building evaluation capacity, and increasing practice-based evidence.

      Workshop 35: Assess Before You Invest — Systematic Screening and Evaluability Assessments

      Offered: Monday, June 5, 9:00 a.m. - 12:00 p.m.; Monday, June 5, 1:00 p.m. - 4:00 p.m.

      Speakers: Aisha Tucker-Brown, Rachel Davis, Kincaid Lowe Beasley

      Level: Advanced Beginner

      Evaluation resources are always scarce, and deciding where to invest in evaluation implementation is not an easy task. Systematic screening and assessment (SSA) can be used as a strategy to help make those decisions. Rather than risk the entire evaluation budget on an assumption that a program is being implemented with fidelity and is ready for evaluation, implementing a system that includes expert panel review and evaluability assessments (EA) to assess and sift through programs may help in the search for evidence of evaluation readiness. This workshop will present strategies for discerning when to invest in rigorous evaluation for evidence building. The workshop will discuss the practical implementation of SSAs and EAs to mitigate investment risk in the search for building evidence for programs operating in the field. Participants will have the opportunity to apply many of the steps through case examples.

      You will learn:

      • The utility of pre-evaluation
      • Approaches for assessing evaluability
      • How to implement an SSA/EA
      • Approaches for sharing preliminary evidence

      Audience: This course is designed for individuals who are looking for practical ways to assess evaluability and prioritize evaluation resources.  All participants should have a basic knowledge of program evaluation. 

      Aisha Tucker-Brown is a Senior Evaluator in the Evaluation and Program Effectiveness Team of the Applied Research and Evaluation Branch at the Division for Heart Disease and Stroke Prevention, Centers for Disease Control and Prevention (CDC). She has more than 10 years of evaluation experience and has spent her eight-year tenure at CDC focusing on program evaluation, evaluation design, and increasing practice-based evidence.

      Rachel Davis is a Senior Evaluator in the Evaluation and Program Effectiveness Team of the Applied Research and Evaluation Branch at the Division for Heart Disease and Stroke Prevention, Centers for Disease Control and Prevention (CDC). She has more than 13 years of evaluation experience at CDC and nearly 18 years of experience in the field of public health, focusing on chronic disease epidemiology, program evaluation, evaluation research, and increasing practice-based evidence.

      Kincaid Lowe Beasley is a Health Scientist in the Evaluation and Program Effectiveness Team of the Applied Research and Evaluation Branch at the Division for Heart Disease and Stroke Prevention, Centers for Disease Control and Prevention (CDC).  She has two years of experience as an evaluator at CDC and nearly 8 years of experience in the field of public health research and evaluation focusing on chronic disease prevention.