Professional Development Workshops are hands-on, interactive sessions that provide an opportunity to learn new skills or hone existing ones at Evaluation 2011.
Professional development workshops precede and follow the conference. They differ from sessions offered during the conference itself in at least three ways:
1. Each is longer (3, 6, or 12 hours) and thus provides a more in-depth exploration of a skill or area of knowledge.
2. Presenters are paid for their time and are expected to have significant experience both presenting and in the subject area.
3. Attendees pay separately for these workshops and are given the opportunity to evaluate the experience.
Sessions are filled on a first-come, first-served basis and many usually fill before the conference begins.
Registration for professional development workshops is handled through the conference registration forms; however, you may register for professional development workshops even if you are not attending the conference itself (still using the regular conference registration forms - just uncheck the conference registration box).
Workshop registration fees are in addition to the fees for conference registration:
| | Members: Early (through Sep 30) | Members: Standard (through Oct 21) | Members: On Site (after Oct 21) | Non-members: Early (through Sep 30) | Non-members: Standard (through Oct 21) | Non-members: On Site (after Oct 21) | Full-time Students: Early (through Sep 30) | Full-time Students: Standard (through Oct 21) | Full-time Students: On Site (after Oct 21) |
|---|---|---|---|---|---|---|---|---|---|
| Conference Registration | $175 | $215 | $265 | $255 | $295 | $345 | $90 | $100 | $110 |
| Two Day Workshop | $300 | $320 | $360 | $400 | $440 | $480 | $160 | $180 | $200 |
| One Day Workshop | $150 | $160 | $180 | $200 | $220 | $240 | $80 | $90 | $100 |
| Half Day Workshop | $75 | $80 | $90 | $100 | $110 | $120 | $40 | $45 | $50 |
Sessions that are closed because they have reached their maximum attendance will be clearly marked below the session name. No further registrations will be accepted for full sessions and we do not maintain waiting lists. Once sessions are closed, they will not be re-opened.
(1) Qualitative Methods; (2) Quantitative Methods; (3) Actionable Answers; (4) Logic Models; (5) Interactive Practice; (6) TBA; (7) Developmental Evaluation; (8) Building Evaluation Capacity
(9) Nontoxic Slideshows; (10) Hearing Silenced Voices; (11) RealWorld Evaluation; (12) Evaluation 101; (13) Intro Consulting Skills; (14) Feminist/Gender Responsive Evaluation; (15) Systems Thinking; (16) Propensity Score Matching
(17) Logic Models +; (18) Longitudinal Data Analysis; (19) Introduction to GIS; (20) Transformative Mixed Methods; (21) Multiple Regression; (22) Dissertation Proposal; (23) Utilization-Focused Evaluation; (24) Appreciative Inquiry; (25) Case Study Evaluation (Full); (26) Theory-Driven Evaluation; (27) Participatory Program Implementation; (28) Quasi-experimental Methods; (29) Effect Size and Association Measures; (30) Multilevel Models; (31) Conflict Resolution; (32) Qualitative Research Strategies; (33) Logic of Evaluation; (34) Survey Design; (35) Concept Mapping; (36) Focus Group Moderation; (37) Performance Measurement Systems
(38) Effective Reporting; (40) Basics of Sampling; (41) Scoring Performance Assessments; (42) Intermediate Consulting Skills (Full)
Half Day Workshops, Wednesday, November 2, 12 PM to 3 PM
(43) Empowerment Evaluation; (44) Fun & Games with Logframes; (45) Cost-Effectiveness Analysis of HHS; (46) Waawiyeyaa (Circular) Evaluation Tool; (47) Desktop Software; (48) Theories and Frameworks of Practice
(50) Introduction to PSPP; (51) Evaluate Systems Change; (52) Purposeful Program Theory; (53) Assessing ROI in HHS
1. Qualitative Methods in Evaluation
New and experienced qualitative researchers alike often ask: "Is my approach to qualitative research consistent with core principles of the method?" Evaluators who integrate qualitative methods into their work are responsible to ensure this alignment.
This session aims to help you become a strong decision maker through the life of a qualitative research project. This process is facilitated by attention to the following questions:
You will learn:
Raymond Maietta is President of ResearchTalk Inc., a qualitative research consulting and professional development company. Lessons learned from 15 years of work with qualitative researchers in the fields of evaluation, health science, and the social sciences inform a book he is completing, titled Sort and Sift, Think and Shift: A Multi-dimensional Approach to Qualitative Research.
Session 1: Qualitative Methods
Scheduled: Monday and Tuesday, October 31 and November 1, 9 AM to 4 PM
Level: Beginner, no prerequisites
2. Quantitative Methods for Evaluators
Quantitative data offers opportunities for numerical descriptions of populations and samples. The challenge is in knowing which analyses are best for a given situation. Designed for the practitioner needing a refresher course and/or guidance in applying quantitative methods to evaluation contexts, the workshop covers the basics of parametric and nonparametric statistics, as well as how to report your findings.
Hands-on exercises and computer demonstrations interspersed with mini-lectures will introduce methods and concepts. The instructor will review examples of research and evaluation questions and the statistical methods appropriate to developing a quantitative data-based response.
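As a rough illustration of the parametric-versus-nonparametric choice the workshop covers (not workshop material; the data and group names below are simulated and hypothetical), the same two-group question can be answered with an independent-samples t-test or with the rank-based Mann-Whitney U test:

```python
# Illustrative sketch only (simulated data, hypothetical group names): the same two-group
# comparison run with a parametric test and with a nonparametric alternative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
program_group = rng.normal(75, 10, 40)      # e.g., participants' scores
comparison_group = rng.normal(70, 10, 40)   # e.g., comparison group's scores

# Parametric: independent-samples t-test (assumes roughly normal, similar-variance data).
t_stat, t_p = stats.ttest_ind(program_group, comparison_group)

# Nonparametric: Mann-Whitney U test (rank-based; no normality assumption).
u_stat, u_p = stats.mannwhitneyu(program_group, comparison_group, alternative="two-sided")

print(f"t-test:       t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney: U = {u_stat:.0f}, p = {u_p:.3f}")
```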
You will learn:
Katherine McKnight applies quantitative analysis as Director of Program Evaluation for Pearson Achievement Solutions and is co-author of Missing Data: A Gentle Introduction (Guilford, 2007). Additionally, she teaches Research Methods, Statistics, and Measurement in Public and International Affairs at George Mason University in Fairfax, Virginia.
3. Getting Actionable Answers for Real-World Decision Makers: Evaluation Nuts and Bolts that Deliver
Ever read an evaluation report and still wondered how worthwhile the outcomes really were or whether the program was a waste of money? What if evaluations actually asked evaluative questions and gave clear, direct, evaluative answers? This workshop covers 1) big-picture thinking about key stakeholders, their information needs, and the evaluative questions they need answered; 2) a hands-on introduction to evaluative rubrics as a way of directly answering those questions; 3) guidance for designing interview and survey questions that are more easily interpreted against evaluative rubrics and capture evidence of causation; and 4) a reporting structure that gets to the point, delivering direct evaluative answers that decision makers can really use.
This workshop combines mini lectures, small and large group exercises to build big picture thinking to focus the evaluation on what really matters, and the most important “nuts and bolts” concepts and tools needed to deliver actionable answers.
You will learn:
E Jane Davidson runs her own successful consulting practice, Real Evaluation Ltd, blogs with Patricia Rogers on the entertaining Genuine Evaluation blog, and is the 2005 recipient of AEA's Marcia Guttentag Award. Her popular text, Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation (Sage, 2005), is used by practitioners and graduate students around the world. Jane's work builds on Michael Scriven's contributions to the logic and methodology of evaluation, combined with concepts and techniques from utilization-focused and theory-based evaluation, and translated into concrete, easy-to-follow practical methodologies that can be applied in real-world settings.
4. Logic Models for Program Evaluation and Planning
Many programs fail to start with a clear description of the program and its intended outcomes, undermining both program planning and evaluation efforts. The logic model, as a map of what a program is and intends to do, is a useful tool for clarifying objectives, improving the relationship between activities and those objectives, and developing and integrating evaluation plans and strategic plans.
First, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining and validating models. Then, we'll examine how to improve logic models using some fundamental principles of "program theory", demonstrate how to use logic models effectively to help frame questions in program evaluation, and show some ways logic models can also inform strategic planning. Both days use modules with presentations, small group case studies, and debriefs to reinforce group work.
You will learn:
Thomas Chapel is the central resource person for planning and program evaluation at the Centers for Disease Control and Prevention and a sought-after trainer. Tom has taught this workshop for the past four years to much acclaim.
5. Strategies for Interactive Evaluation Practice
In all of its many forms, evaluation practice requires evaluators to be skilled facilitators of interpersonal interactions. Whether you are completely in charge, working collaboratively with program staff, or coaching individuals conducting their own study, you need to interact with people throughout the course of an evaluation. This workshop will provide theoretical grounding (social interdependence theory, conflict theory, and evaluation use theory) and practical frameworks for analyzing and extending your own practice.
Through presentations, discussion, reflection, and case study, you will learn and experience strategies to enhance involvement and foster positive interaction in evaluation. You are encouraged to bring examples of challenges faced in your own practice to this workshop consistently lauded for its ready applicability to real-world evaluation contexts.
You will learn:
Jean King has over 30 years of experience as an award-winning teacher at the University of Minnesota. As an evaluation practitioner, she has received AEA’s Myrdal award for outstanding evaluation practice. Laurie Stevahn is associate professor of educational leadership at Seattle University with extensive facilitation experience as well as applied experience in evaluation. The two are co-authors of Interactive Evaluation Practice: Mastering the Interpersonal Dynamics of Program Evaluation (Sage, forthcoming) and Needs Assessment Phase III: Taking Action for Change (Sage, 2010).
6. TBA
7. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use
Developmental evaluation (DE) is especially appropriate for innovative initiatives or organizations in dynamic and complex environments where participants, conditions, interventions, and context are turbulent, pathways for achieving desired outcomes are uncertain, and conflicts about what to do are high. DE supports reality-testing, innovation, and adaptation in complex dynamic systems where relationships among critical elements are nonlinear and emergent. Evaluation use in such environments focuses on continuous and ongoing adaptation, intensive reflective practice, and rapid, real-time feedback. The purpose of DE is to help develop and adapt the intervention (different from improving a model).
This evaluation approach involves partnering relationships between social innovators and evaluators in which the evaluator’s role focuses on helping innovators embed evaluative thinking into their decision-making processes as part of their ongoing design and implementation initiatives. DE can apply to any complex change effort anywhere in the world. Through lecture, discussion, and small-group practice exercises, this workshop will position DE as an important option for evaluation in contrast to formative and summative evaluations as well as other approaches to evaluation.
You will learn:
Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation, he bases this workshop on his recently published book, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (Guilford, 2010).
8. Building Evaluation Capacity of Community Organizations
Are you working with local community groups (coalitions, nonprofits, social service agencies, local health departments, volunteers, school boards) that are trying to evaluate the outcomes of their work to meet a funding requirement, an organizational expectation, or to enhance their own program performance?
In this highly interactive workshop, you will practice and reflect on a variety of activities and adult learning techniques for building basic evaluation skills related to the core components of evaluation: engaging stakeholders, focusing the evaluation, data collection, data analysis and use. Try the activities out, assess their appropriateness for your own situation, and expand your toolbox. Bring your own ‘best practices' to share as we work towards building the evaluation capacity of community practitioners and organizations. This year’s workshop also will include a section on organizational capacity building that goes beyond individual competencies and skills – strategies to build resources and support and the organizational environment for sustaining evaluation.
You will learn:
Ellen Taylor-Powell is widely recognized for her work in evaluation capacity building. Her 20 years in Extension have focused continuously on evaluation training and capacity building with concentration on individual, team, and organizational learning.
9. Nontoxic Slideshows: Practical Methods for Improving Evaluation Communication
"Death by Powerpoint" won't literally kill your audience. But it will cause them to check their phone messages, flip through the conference program, and fall asleep. In this workshop, attendees will learn the science behind good slideshows and will leave with direct, pointed changes that can be immediately administered to their own conference presentations. Beyond the scope of the conference, the workshop will address principles of presentation design that support legibility, comprehension, and retention of our evaluation work in the minds of our stakeholders.
Grounded in visual processing theory, the principles will enhance attendees' ability to communicate more effectively with peers, colleagues, and clients through a focus on the proper use of color, placement, and type in presentations. Attendees are strongly encouraged to maximize the value of this highly interactive workshop experience by bringing printouts of slides they intend to present at the conference.
You will learn:
Stephanie Evergreen's dissertation work examines the role of graphic design in evaluation communication. She is the founding chair of AEA's Data Visualization and Reporting TIG and in that role has started various initiatives to enhance communication among fellow evaluators. An excellent facilitator, she received feedback from a recent AEA Coffee Break Webinar indicating that attendees left informed, excited, and wanting more!
Session 9: Nontoxic Slideshows
Prerequisites: You should regularly develop slideshows for evaluation work and be comfortably able to navigate slideshow software unassisted.
Scheduled: Tuesday, November 1, 9 AM to 4 PM
Level: Intermediate
10: Hearing Silenced Voices
While evaluators understand the importance of multiple stakeholder perspectives, many struggle with how to ensure the participation of those traditionally 'without a voice': vulnerable or disenfranchised populations. For example, children and youth, persons with disabilities, and those with emerging literacy in majority languages hold important views regarding the programs, services, and situations which affect them, but their perspectives are not always included.
This workshop will be grounded in theory, will explore ethical considerations, and will be highly participatory and interactive. Stemming from a rights-based approach, the workshop will explore the why and how of including traditionally disenfranchised populations in evaluation. Through their work in Canada and internationally, the facilitators will share a variety of techniques and how these can be coupled with more traditional methods. You will be introduced to the use of rubrics, case studies, visual methods, mapping, and learning walks as tools for eliciting often-silenced voices.
You will learn:
Linda Lee and Larry Bremner of Proactive Information Services are experienced facilitators and have worked in the field of evaluation for over 30 years. Their practice has included working with those from rural, immigrant, and native communities, in North America as well as in Europe, Asia, and South America, in ways that ensure that evaluation approaches and methods are culturally, socially, and developmentally appropriate.
Session 10: Hearing Silenced Voices
Prerequisites: Basic knowledge of and experience with qualitative methods, and experience working with marginalized or disenfranchised communities or populations
Scheduled: Tuesday, November 1, 9 AM to 4 PM
Level: Intermediate
11: What's New in RealWorld Evaluation?
In graduate school, you might have learned how to do research using randomized control trials. Then you were asked to design an evaluation of a “real” program. But, what if there was no comparative baseline data on the project participants and people from the same population had not been randomly assigned to treatment and control groups? Now, what if the client won’t allocate sufficient time – or money – to conduct what you feel would be a properly rigorous evaluation? How can you conduct adequately valid evaluations under such circumstances?
Welcome to the real world of program evaluation! This workshop will explore the approaches advocated in the new 2nd edition of the popular RealWorld Evaluation book. The authors will share personal examples from their own extensive international experiences and, through participatory processes, attendees will engage with techniques that help evaluators and clients ensure the best quality evaluations possible in spite of RealWorld constraints.
You will learn:
Jim Rugh has more than 45 years of experience in international development and for 12 years headed the evaluation department of CARE International. Michael Bamberger spent a decade working with NGOs in Latin America and almost 25 years working on evaluation with the World Bank. They, along with Linda Mabry, first co-authored RealWorld Evaluation: Working Under Budget, Time, Data and Political Constraints in 2006. The 2nd edition will be released in 2011, just in time for the AEA conference!
Session 11: RealWorld Evaluation
Prerequisites: Experience in conducting evaluations and pre-conference review of background materials
Scheduled: Tuesday, November 1, 9 AM to 4 PM
Level: Intermediate
12: Evaluation 101
This workshop is designed to introduce new evaluators, and persons who may be responsible organizationally for evaluation, to the processes of thinking about, planning, implementing, and reporting an evaluation across a variety of settings. We'll work together to demystify the jargon, explore evaluation standards and principles that improve practice, and understand what is needed to conduct evaluations in complex settings.
A central focus of the workshop will be the inclusion of issues related to cultural competency and responsiveness in terms of evaluation thinking, purposes, planning processes and use of standards for good evaluations. Through a combination of brief didactic presentations and small group work throughout the workshop, we'll explore a framework for designing the evaluation from problem identification to use.
You will learn:
John McLaughlin of the College of William and Mary and Donna Mertens of Gallaudet University bring over 40 years of combined experience in the delivery of workshops and seminars focused on all facets of program evaluation. They are noted authors and active evaluators and are currently engaged in evaluation work across a variety of settings.
Session 12: Evaluation 101
Scheduled: Tuesday, November 1, 9 AM to 4 PM
Level: Beginner
13. Getting Started: Introductory Consulting Skills for Evaluators
Are you a program evaluator contemplating venturing out on your own? For many, that is both an exciting and an intimidating prospect. Taught by an independent consultant, this practical workshop will reveal the simple but important skills needed to be successful.
Through lecture, anecdote, discussion, small-group exercises, and independent reflection, this workshop will help participants problem solve – and develop an action plan. The teaching template combines a synthesis of management consulting literature along with evaluation and applied research processes, and entrepreneurial and small business skills. It also provides valuable samples and worksheets along with insider tips, trade secrets, and personal anecdotes to help participants identify and address the unique issues they may face. This course may go hand-in-hand with Staying the Course, which also addresses the challenges faced by independent consultants.
You will learn:
Gail Barrington has more than 25 years of practical experience. She founded Barrington Research Group, Inc. in 1985 and has conducted more than 100 program evaluation studies. In 2008, she won the Canadian Evaluation Society award for her Contribution to Evaluation in Canada. A frequent presenter, she is the author of a new book entitled Consulting Start-up & Management, due out from SAGE in October 2011.
Session 13: Intro Consulting Skills
Scheduled: Tuesday, November 1, 9 AM to 4 PM
Level: Beginner
14: Tools and Techniques of Feminist and Gender Responsive Evaluation
While most development and many U.S.-based agencies are committed to gender equality, gender continues to be one of the areas inadequately addressed in many evaluations. And, conventional research tools are often not well suited for understanding how different social groups are affected by program interventions. This workshop will help evaluators recognize the impact of the lens through which evaluative decisions are made and the role that governmental, organizational, community and personal values play at every stage of the process.
Tools and techniques for developing gender sensitive conceptual frameworks, evaluation design, sampling, data collection, analysis, dissemination and use of evaluations will be presented. Case studies using different gender sensitive approaches in both a U.S. and international context will be shared and critiqued and participants will apply the techniques through hands-on activities.
You will learn:
Kathryn Bowen is Director of Program Evaluation at Centerstone Research Institute and has conducted feminist evaluations of community based prevention and treatment programs for 11 years. Michael Bamberger is an independent consultant who worked for 9 years as the Senior Sociologist in the World Bank Gender and Development Department. Tessie Catsambas is President of EnCompass LLC and brings 25 years experience in planning, evaluation and management of international programs and activities. Belen Sanz Luque is an avid trainer and facilitator with UN Women.
Session 14: Feminist/Gender Responsive Evaluation
Scheduled: Tuesday, November 1, 9 AM to 4 PM
Level: Beginner
15: Systems Thinking for Evaluation Practice
Systems thinking can help evaluators understand the world – in all its diversity – in ways that are practical, comprehensive, and wise. For those interested in making sense of the complex and sometimes messy situations we often encounter in practice, this workshop provides an overview of systems thinking and how it can be used in evaluation. A systems approach is particularly useful in situations where rigorous rethinking, reframing, and unpacking of complex realities and assumptions are required. Using systems approaches provides a broader perspective and helps the evaluator see the interconnectedness of component parts in a coordinated manner that emphasizes balance and fit. Evaluations based on systems concepts generate rich descriptions of complex, interconnected situations based on multiple perspectives that help participants build deeper meanings and understanding that can inform choices for subsequent action.
Through mini-lectures, group activities, and hands-on practice, this workshop teaches fundamental concepts of systems thinking and provides opportunities to apply learnings to everyday practice – making sense of the world, using systems to understand things better, and orienting ourselves towards the world in a way that embraces complexity and ambiguity.
You will learn:
Janice Noga, an independent consultant with Pathfinder Evaluation and Consulting, has taught graduate-level courses in statistics, research methods, human learning, and classroom assessment and evaluation. Margaret Hargreaves, a senior health researcher at Mathematica Policy Research, has taught program evaluation methods at the graduate level and is currently serving as program co-chair for the Systems in Evaluation Topical Interest Group.
Session 15: Systems Thinking
Prerequisites: Experience designing and implementing evaluation projects; attendees may be new to systems thinking
Scheduled: Tuesday, November 1, 9 AM to 4 PM
Level: Intermediate
16. Propensity Score Matching: Theories and Applications
When experimental designs are infeasible, evaluation researchers often use observational data to estimate treatment effects. Propensity score matching, used to improve covariate balance, has been gaining popularity. This workshop will introduce you to basic theories and principles, demonstrate the process, and provide the tools necessary to perform the work. We’ll demonstrate using the free statistical software program R and provide attendees with a CD containing both the R software and example data.
The workshop will include a review of experimental and non-experimental designs, an overview of commonly used matching methods, and a step-by-step demonstration as well as team activities.
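For orientation only, here is a minimal Python sketch of the general idea the workshop demonstrates in R; the dataset, covariates, and matching rule below are all hypothetical.

```python
# Minimal propensity score matching sketch in Python (the workshop itself demonstrates in R).
# Everything here is hypothetical: simulated data, two made-up covariates, 1-to-1 matching.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(40, 10, n),
    "income": rng.normal(50, 15, n),
})
# Treatment assignment depends on the covariates, so the groups start out imbalanced.
p_true = 1 / (1 + np.exp(-(-4 + 0.05 * df["age"] + 0.03 * df["income"])))
df["treated"] = rng.binomial(1, p_true)

# Step 1: estimate each unit's propensity score from the observed covariates.
model = LogisticRegression().fit(df[["age", "income"]], df["treated"])
df["pscore"] = model.predict_proba(df[["age", "income"]])[:, 1]

# Step 2: 1-to-1 nearest-neighbor matching on the propensity score, without replacement.
treated = df[df["treated"] == 1]
control_pool = df[df["treated"] == 0].copy()
matches = []
for idx, row in treated.iterrows():
    j = (control_pool["pscore"] - row["pscore"]).abs().idxmin()
    matches.append((idx, j))
    control_pool = control_pool.drop(j)  # each control unit is used at most once

print(f"{len(matches)} treated units matched to controls")
```

Nearest-neighbor matching without replacement is only one of the matching methods the workshop surveys, and a real analysis would go on to check covariate balance in the matched sample.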
You will learn:
Ning Rui from Research for Better Schools and Haiyan Bai from the University of Central Florida have written in multiple venues on the application of propensity score matching. New to workshop facilitation at AEA, they bring with them their experience facilitating at their home universities and at the Eastern Evaluation Research Society.
Session 16: Propensity Score Matching
Scheduled: Tuesday, November 1, 9 AM to 4 PM
Level: Beginner
17: Logic Models - Beyond the Traditional View: Metrics, Methods, Expected and Unexpected Change
When should we use (or not use) logic models? What kind of information can we put in logic models? What is the value of different forms and scales of models for the same program? What different uses do logic models have across a program's life cycle? What kinds of relationships can we represent? What are the relationships between logic models, metrics, and methodology? How can we manage multiple uses of logic models -- evaluation, planning, advocacy -- and explanation versus prediction? How can we peg the detail in a model to our actual state of knowledge about a program? How does graphic design relate to information richness, and why does it matter? What choices do evaluators have for working with stakeholders to develop and use logic models?
Evaluators need to know how to respond to these questions. Through lecture and discussion, this workshop will endeavor to provide answers.
You will learn:
Jonathan Morrell of Fulcrum Corporation has produced and delivered a wide range of training workshops to professional associations, government, and private sector settings. Morell employs logic models throughout his evaluation practice and consulting work. The issue of program logic, and how to represent that logic, figures prominently in his recent book Evaluation in the Face of Uncertainty.
Session 17: Logic Models +
Prerequisites: Experience constructing logic models
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Intermediate
18: Longitudinal Data Analysis: From the Classics to Modern Structural Equation Modeling
Many evaluation studies make use of longitudinal data. However, while much can be learned from repeated measures, the analysis of change is also associated with a number of special problems. The workshop takes up these issues and reviews how traditional methods in the analysis of change, such as the paired t-test, repeated measures ANOVA or MANOVA, address these problems.
The core of the workshop will be an introduction to SEM-based latent growth curve modeling (LGM). A mixture of PowerPoint presentation, group discussion, and exercises with a special focus on model specification will help us explore LGM in contrast to more traditional approaches to analyzing change. We will show how to specify, estimate, and interpret growth curve models. In contrast to most traditional methods, which are restricted to the analysis of mean changes, LGM allows the investigation of unit-specific (individual) changes over time. Towards the end of the workshop, we will discuss more recent advancements of LGM, including multiple group analyses, the inclusion of time-varying covariates, and cohort sequential designs. We'll give detailed directions for model specification, and all analyses will be illustrated by practical examples.
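As a conceptual warm-up only, and not the SEM-based approach taught in the workshop, the sketch below fits a separate straight line to each simulated individual's repeated measures to make the idea of unit-specific intercepts and slopes concrete; LGM estimates these individual trajectories as latent variables within a single model.

```python
# Conceptual warm-up only: fit a separate straight line to each simulated individual's
# repeated measures to show what "unit-specific" intercepts and slopes mean. SEM-based
# LGM estimates these individual trajectories as latent variables in a single model.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_waves = 100, 4
time = np.arange(n_waves)                      # measurement occasions 0..3

# Each person has their own true intercept and slope, plus measurement noise.
intercepts = rng.normal(50, 5, n_people)
slopes = rng.normal(2, 1, n_people)
scores = intercepts[:, None] + slopes[:, None] * time + rng.normal(0, 2, (n_people, n_waves))

# Ordinary least squares per person: recover an intercept and slope for each individual.
X = np.column_stack([np.ones(n_waves), time])
coefs = np.array([np.linalg.lstsq(X, y, rcond=None)[0] for y in scores])

print("Mean estimated intercept:", coefs[:, 0].mean().round(2))
print("Mean estimated slope:    ", coefs[:, 1].mean().round(2))
print("SD of individual slopes: ", coefs[:, 1].std().round(2))  # between-person variation in change
```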
You will learn:
Manuel C Voelkle is a research scientist at the Max Planck Institute in Berlin, Germany. He teaches courses on advanced multivariate data analysis and research design and research methods. Werner W Wittmann is professor of psychology at the University of Mannheim, where he heads a research and teaching unit specializing in research methods, assessment and evaluation research.
Session 18: Longitudinal Data Analysis
Prerequisites: Familiarity with structural equation models (SEM) and regression analytic techniques
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Intermediate
19. Introduction to GIS (Geographic Information Systems) and Spatial Analysis in Evaluation
This workshop introduces Geographic Information Systems (GIS) and spatial analysis concepts and uses in environmental/ecological and community health/public health program and policy evaluation. It defines and covers steps for undertaking vector and raster mapping projects plus challenges of doing GIS cost effectively. It will provide information about pros and cons of a variety of mapping software (some free) and how to obtain base maps and data for map contents.
Using case study examples from environmental/ecological and community health/public health content areas, we will demonstrate and you will practice designing a GIS project, setting up spatial analysis, and using GIS approaches to evaluate programs or policy initiatives. The workshop will discuss ways to involve evaluation stakeholders (e.g., staff, program clients) in mapping projects. We’ll use case studies, demonstration, small group exercises, and traditional presentation to introduce attendees to the value and use of GIS for evaluation; however, please note that this is not taught in a computer lab and there will be no hands-on work with GIS software.
You will learn:
Both Arlene Hopkins and Stephen Maack are experienced university teachers and regularly present to adults in professional venues, in conference sessions and as workshop facilitators. Stephen Maack has a Ph.D. in anthropology with a research specialty in social change. Ms. Hopkins has an MA in education, and is also a former K-12 teacher. Both use GIS in their work as practicing evaluators.
Session 19: Introduction to GIS
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Beginner
20: Transformative Mixed Methods Evaluations
What are the implications of the transformative paradigm for the use of mixed methods directly focused on the furtherance of social justice? How can you apply the methodological implications of the transformative paradigm in the design of an evaluation? What approaches are useful for an evaluator to undertake a transformative mixed methods evaluation in diverse contexts? How can transformative mixed methods be applied to increase the probability of social justice goals being achieved? What sampling and data collection strategies are appropriate? What does it mean to address the myth of homogeneity?
We'll explore the answers to these and other questions, and the implications of positioning oneself as an advocate for social justice and human rights in an evaluation context. We'll demonstrate, discuss, and share examples of evaluations that use transformative mixed methods that focus on dimensions of diversity such as race/ethnicity, religion, language, gender, indigenous status, and disability. The philosophical assumptions of the transformative paradigm will be used to derive methodological implications that can be applied to the design of evaluations that seek to further social justice in marginalized communities.
You will learn:
Donna Mertens is a Past President of the American Evaluation Association who teaches evaluation methods and program evaluation to deaf and hearing graduate students at Gallaudet University in Washington, D.C. Mertens recently authored Transformative Research and Evaluation (Guilford). Katrina L Bledsoe is an evaluator with Education Development Center, Inc., conducting and managing evaluations in culturally complex communities nationally.
Session 20: Transformative Mixed Methods
Prerequisites: Knowledge of basic evaluation practice
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Intermediate
21. Applications of Multiple Regression for Evaluators: Mediation, Moderation, and More
Multiple regression is a powerful and flexible tool that has wide applications in evaluation and applied research. Regression analyses are used to describe relationships, test theories, make predictions with data from experimental or observational studies, and model linear or nonlinear relationships.
Issues we'll explore include selecting models that are appropriate to your data and research questions, preparing data for analysis, running analyses, interpreting results, and presenting findings to a nontechnical audience. The facilitator will demonstrate applications from start to finish with live SPSS and Excel, and then you will tackle multiple real-world case examples in small groups. Detailed handouts include explanations and examples that can be used at home to guide similar applications.
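Because the workshop demonstrates in SPSS and Excel, the following Python/statsmodels sketch is only an illustrative stand-in: it fits a simple moderation model in which the effect of a hypothetical dosage variable depends on a hypothetical support variable, via an interaction term.

```python
# Hedged illustration of a moderation (interaction) model; the workshop demonstrates in
# SPSS and Excel, so this Python/statsmodels version is only a stand-in. Data simulated;
# 'dosage', 'support', and 'outcome' are hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "dosage": rng.uniform(0, 10, n),   # hypothetical program exposure
    "support": rng.uniform(0, 1, n),   # hypothetical moderator (e.g., staff support)
})
# The effect of dosage on the outcome depends on the level of support (an interaction).
df["outcome"] = (2 + 0.5 * df["dosage"] + 1.0 * df["support"]
                 + 0.8 * df["dosage"] * df["support"] + rng.normal(0, 1, n))

# In the formula, 'dosage * support' expands to dosage + support + dosage:support,
# where the dosage:support coefficient captures the moderation (interaction) effect.
model = smf.ols("outcome ~ dosage * support", data=df).fit()
print(model.summary().tables[1])  # coefficient table, including the interaction term
```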
You will learn:
Dale Berger of Claremont Graduate University is a lauded teacher of workshops and classes in statistical methods. Recipient of the outstanding teaching award from the Western Psychological Association, he is also the author of "Using Regression Analysis" in The Handbook of Practical Program Evaluation.
Session 21: Multiple Regression
Prerequisites: Basic inferential and descriptive statistics, including correlation and regression; familiarity with SPSS
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Intermediate
22: How to Prepare an Evaluation Dissertation Proposal
Developing an acceptable dissertation proposal often seems more difficult than conducting the actual research. Further, proposing an evaluation as a dissertation study can raise faculty concerns of acceptability and feasibility. This workshop will lead you through a step-by-step process for preparing a strong, effective dissertation proposal with special emphasis on the evaluation dissertation.
The workshop will cover such topics as the nature, structure, and multiple functions of the dissertation proposal; how to construct a compelling argument; how to develop an effective problem statement and methods section; and how to provide the necessary assurances to get the proposal approved. Practical procedures and review criteria will be provided for each step. The workshop will emphasize application of the knowledge and skills taught to the participants’ personal dissertation situation through the use of an annotated case example, multiple self-assessment worksheets, and several opportunities for questions of personal application.
You will learn:
Nick L Smith is the co-author of How to Prepare a Dissertation Proposal (Syracuse University Press) and a past president of AEA. He has taught research and evaluation courses for over 20 years at Syracuse University and is an experienced workshop presenter. He has served as a dissertation advisor to multiple students and is the primary architect of the curriculum and dissertation requirements in his department.
Session 22: Dissertation Proposal
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Beginner
23. Utilization-Focused Evaluation
Evaluations should be useful, practical, accurate and ethical. Utilization-Focused Evaluation is a process that meets these expectations and promotes use of evaluation from beginning to end. With a focus on carefully targeting and implementing evaluations for increased utility, this approach encourages situational responsiveness, adaptability and creativity. This training is aimed at building capacity to think strategically about evaluation and increase commitment to conducting high quality and useful evaluations.
Utilization-Focused evaluation focuses on the intended users of the evaluation in the context of situational responsiveness with the goal of methodological appropriateness. An appropriate match between users and methods should result in an evaluation that is useful, practical, accurate, and ethical, the characteristics of high quality evaluations according to the profession's standards. With an overall goal of teaching you the process of Utilization-Focused Evaluation, the session will combine lectures with concrete examples and interactive case analyses.
You will learn:
Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation, he bases this workshop on the newly completed fourth edition of his best-selling evaluation text, Utilization Focused Evaluation: The New Century Text (SAGE).
Session 23: Utilization-Focused Evaluation
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Beginner
24: Reframing Evaluation Through Appreciative Inquiry
In this one-day workshop, participants will be introduced to Appreciative Inquiry and will explore ways in which it may be applied in their own evaluation work.
You will experience various phases of Appreciative Inquiry by using appreciative interviews to focus an evaluation, to structure and conduct interviews, and to develop indicators. In skills building sessions, you will experience the application of "reframing" in a practical and grounded way. Through real-world case examples, practice case studies, exercises, discussion and short lectures, participants will identify how to incorporate AI into their evaluation contexts.
You will learn:
Tessie Tzavaras Catsambas is President of EnCompass LLC and brings 25 years experience in planning, evaluation and management of international programs and activities. Hallie Preskill is the Executive Director of FSG's Strategic Learning & Evaluation Center, and works on a variety of evaluation and learning projects that span multiple sectors and program areas. Catsambas and Preskill are co-authors of the recent book Reframing Evaluation Through Appreciative Inquiry.
Session 24: Appreciative Inquiry
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Beginner
25: Designing and Conducting Case Study Evaluation
This session is now full. There is no waiting list for the session. Please choose an alternative.
Case studies are inherently complex, investigating a case in depth and from many angles to meet the interests of a range of stakeholders. This workshop will explore how to design and conduct rigorous case study evaluation that offers unique insights into multifaceted policies and programs. Specific issues to be covered include how to establish validity and credibility in case study; ethics and politics in the field; how to generalize from the case and theorize within the case; and how to get beyond the perception of case study as storytelling, important though this is, to offer critique of programs and policies.
The workshop will be conducted in four sessions with an interactive pedagogy, including short presentations, critique, dialogue, and group activities. It will start by examining the purposes, types, and justification for case study evaluation and then focus on how to plan and design case studies that produce rigorous, valid knowledge. In the afternoon the first session will be on analysis, interpretation and generalization; the second on reporting case studies in diverse ways for different audiences to maximize their use. Ethical issues will be discussed throughout. The course aims to provide you with a strong justification for the methodology of case study evaluation and the knowledge and skills to conduct case studies that will promote in-depth understanding of complex programs and inform future policy-making.
You will learn:
Hailing from the University of Southampton, United Kingdom, Helen Simons has been teaching and conducting workshops on case study evaluation for over thirty years in higher education and policy contexts as well as conducting case study evaluations in the field. She has conducted case study evaluation and/or directed evaluation training in over 20 countries and is widely published in the field of evaluation and on the topic of case study methods, including in 2009 Case Study Research in Practice from SAGE.
Session 25: Case Study Evaluation
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Beginner
26: Theory-Driven Evaluation
Designed for evaluators who already have basic training and experience in program evaluation, this workshop will expand your knowledge of the theory-driven evaluation approach and strengthen your practical skills. We'll explore the conceptual framework of program theory and its related evaluation taxonomy, which will facilitate effective communication between evaluators and stakeholders regarding stakeholders' evaluation needs and evaluation options to address those needs.
The workshop starts with an introduction to an evaluation taxonomy that encompasses the full program cycle, including program planning, initial implementation, mature implementation, and outcomes. We'll then focus on how program theory and theory-driven evaluation are useful in the assessment and improvement of a program at each of these stages. The workshop also covers recent developments on the integrative validity model and the bottom-up approach for enhancing the usefulness of evaluation.
You will learn:
Huey Chen is a Senior Evaluation Scientist at the Centers for Disease Control and Prevention (CDC). Previously he was a Professor at the University of Alabama at Birmingham. He has taught workshops, as well as undergraduate and graduate evaluation courses in universities. His 1990 book, Theory-Driven Evaluations, is considered the classic text for understanding program theory and theory-driven evaluation. His 2005 book, Practical Program Evaluation: Assessing and Improving Planning, Implementation, and Effectiveness, provides a major expansion of the scope and usefulness of theory-driven evaluations.
Session 26: Theory-Driven Evaluation
Prerequisites: Basic background in logic modeling or program theory
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Intermediate
27: Participatory Program Implementation
The participatory evaluation process seeks to honor the perspectives, voices, preferences, and decisions of the least powerful and most affected stakeholders and program beneficiaries. Together, we'll apply the Participatory Action for Community Enhancement (PACE) methodology and social factors analysis to real-world examples to improve evaluation practice.
The workshop will be a participatory, facilitated exercise to assess the relationships and expectations involved in working with key stakeholder groups and to bridge the gap between evaluator and stakeholders. As an example, we will look at a program's micro-politics to identify social factors such as gender, class, race, and ethnicity from the participants' perspective and determine the role they play in a program. The facilitators will walk participants through each exercise and carry out a debrief at the end of each to clarify questions on how to conduct the exercises and to identify ways the tools could be adapted to the participants' needs.
You will learn:
Scott Yetter has been a senior trainer for the Participatory Action for Community Empowerment (PACE) methodology for the past 9 years. He also designed and launched a series of multi-day workshops to train local and international staff on participatory methodologies and program implementation. Carlene Baugh is a Monitoring and Evaluation Advisor for CHF International.
Session 27: Participatory Program Implementation
Prerequisites: Community or stakeholder facilitation experience; prior experience working with or participating in community groups
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Intermediate
28: An Introduction to Quasi-Experimental Methods
Quasi-experimental designs provide creative researchers a host of options sometimes unavailable in Randomized Controlled Trials (RCTs). The inability to assign participants randomly to study conditions is an admitted inconvenience, but not a death knell. Many designs available to the quasi-experimentalist provide evidence whose causal implications are so compelling that they are practically undeniable. Quasi-experiments can be performed in contexts in which experiments cannot, owing to practical, political, or other constraints.
In this workshop, we will explore a variety of designs, ranging from simple and complex interrupted time series studies to nonrandomized pretest/posttest/control group designs to regression discontinuity approaches. Real research problems that do not lend themselves to the RCT approach will be discussed, and participants will be invited to solve them while learning about the possibilities available in the class of designs described as quasi-experimental.
You will learn:
William Crano first encountered quasi-experimental methods under the instruction of Donald Campbell. He has taught quasi-experimental methods regularly since coming to Claremont Graduate University 13 years ago, stressing the practical application of scientific knowledge. Quasi-experimental methods have been an important part of his research for many years, and he is keen to show others how these techniques can materially improve our science.
Session 28: Quasi-experimental Methods
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Beginner
29: Using Effect Size and Association Measures in Evaluation
Improve your capacity to understand and apply a range of measures including: standardized measures of effect sizes from Cohen, Glass, and Hedges; Eta-squared; Omega-squared; the Intraclass correlation coefficient; and Cramer’s V. Answer the call to report effect size and association measures as part of your evaluation results. Together we will explore how to select the best measures, how to perform the needed calculations, and how to analyze, interpret, and report on the output in ways that strengthen your overall evaluation.
Through mini-lecture, hands-on exercises, and computer-based demonstration, you will improve your understanding of the theoretical foundation and computational procedures for each measure as well as ways to identify and correct for bias.
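As a small worked illustration (simulated data, not workshop material), two of the measures named above can be computed directly from their definitions: Cohen's d as the mean difference divided by the pooled standard deviation, and eta-squared as the between-group share of total variance.

```python
# Worked sketch on simulated data for two of the measures named above:
# Cohen's d for a two-group comparison and eta-squared from a one-way layout.
import numpy as np

rng = np.random.default_rng(3)
g1 = rng.normal(52, 10, 60)   # e.g., program group scores (hypothetical)
g2 = rng.normal(47, 10, 60)   # e.g., comparison group scores (hypothetical)

# Cohen's d: mean difference divided by the pooled standard deviation.
n1, n2 = len(g1), len(g2)
pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (g1.mean() - g2.mean()) / pooled_sd

# Eta-squared: between-group sum of squares as a share of the total sum of squares.
grand_mean = np.concatenate([g1, g2]).mean()
ss_between = n1 * (g1.mean() - grand_mean) ** 2 + n2 * (g2.mean() - grand_mean) ** 2
ss_total = ((np.concatenate([g1, g2]) - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

print(f"Cohen's d:   {cohens_d:.2f}")
print(f"Eta-squared: {eta_squared:.3f}")
```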
You will learn:
Jack Barnette is Professor of Biostatistics at the University of Colorado School of Public Health. He has taught courses in statistical methods, program evaluation, and survey methodology for more than 30 years. He has been conducting research and writing on this topic for more than ten years. Jack is a regular facilitator both at AEA's annual conference and the CDC/AEA Summer Evaluation Institute. He was awarded the Outstanding Commitment to Teaching Award by the University of Alabama and is a member of the ASPH/Pfizer Academy of Distinguished Public Health Teachers.
Session 29: Effect Size and Association Measures
Prerequisites: Univariate statistics through ANOVA; understanding and use of confidence levels
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Advanced
30: Multilevel Models in Program and Policy Evaluation
Multilevel models open the door to understanding the inter-relationships among nested structures and the ways evaluands change across time. This workshop will demystify multilevel models and present them at an accessible level, stressing their practical applications in evaluation.
Through discussion and hands-on demonstrations, the workshop will address four key questions: When are multilevel models necessary? How can they be implemented using standard software? How does one interpret multilevel results? What are recent developments in this arena?
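For readers who want to see what such a model looks like in code, here is a hedged sketch of a two-level random-intercept model using Python's statsmodels; the workshop itself is not tied to any one package, and the site, dosage, and outcome variables below are hypothetical.

```python
# Hedged sketch of a two-level random-intercept model with Python's statsmodels; the
# workshop itself is not tied to any one package. 'site', 'dosage', and 'outcome' are
# hypothetical, and the data are simulated so individuals are nested within sites.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_sites, n_per_site = 20, 30
site = np.repeat(np.arange(n_sites), n_per_site)
site_effect = rng.normal(0, 2, n_sites)[site]        # level-2 (between-site) variation
dosage = rng.uniform(0, 10, n_sites * n_per_site)    # level-1 predictor
outcome = 10 + 0.6 * dosage + site_effect + rng.normal(0, 1, n_sites * n_per_site)

df = pd.DataFrame({"site": site, "dosage": dosage, "outcome": outcome})

# Random intercept for each site; fixed effect of dosage.
model = smf.mixedlm("outcome ~ dosage", df, groups=df["site"]).fit()
print(model.summary())
```

The summary reports both the fixed effect of the predictor and the estimated between-site variance, which is the nested structure the workshop addresses.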
You will learn:
Sanjeev Sridharan of the University of Toronto has repeatedly taught multilevel models for AEA as well as for the SPSS software company. His recent work on this topic has been published in the Journal of Substance Abuse Treatment, Proceedings of the American Statistical Association and Social Indicators Research. Known for making the complex understandable, his approach to the topic is straightforward and accessible.
Session 30: Multilevel Models
Prerequisites: Basic statistics
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Intermediate
31: Conflict Resolution Skills for Evaluators
Unacknowledged and unresolved conflict can undermine even the most skilled evaluators, if they are unprepared to address the underlying issues and are untrained in conflict-resolution skills. In this workshop, participants will learn the practical applications of conflict resolution theory as they apply to conflict situations in program evaluation through a hands-on, experiential approach using real-life examples from program evaluation. Eleven of the Essential Competencies for Program Evaluators (Stevahn, King, Ghere, & Minnema, 2004), falling within the Situational Analysis, Project Management, Reflective Practice, and Interpersonal Competence skills areas, stress the need for evaluators to have strong conflict-resolution and communication skills.
The workshop employs skill practice and role-plays, and you will receive feedback to enhance your skill development. This training will cover: problem-solving skills; personal conflict styles; communication, listening, and facilitation skills, and will introduce you to a streamlined process of conflict resolution that may be used with clients and stakeholders.
You will learn:
A professional trainer for more than a decade, Jeanne Zimmer has presented variations on the Basic Communication and Conflict Resolution/Mediation Skills Training to hundreds of individuals, most recently at the Minnesota Evaluation Studies Institute. Conflict-resolution trainings are customized for each group. For the past ten years Jeanne has provided basic mediation skills training for volunteer mediators, in compliance with the MN Supreme Court.
Session 31: Conflict Resolution
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Beginner
32: Qualitative Research Strategies for Deeper Understanding
In an ideal world, qualitative research provides rich insights and deep understanding of respondents' perceptions, attitudes, behaviors and emotions. However, it is often difficult for research respondents to remember and report their experiences and to access their emotions.
This practical workshop offers cutting edge approaches to conducting qualitative interviews and focus groups that have proven their effectiveness across a wide range of topics. These strategies engage research participants and help them articulate more thoroughly, thus leading to deep insights for the researcher. Through demonstration and hands-on participation, we will examine techniques that help respondents to reconstruct their memories - such as visualization, mind-mapping, diaries and storytelling; articulate their emotions, through metaphorical techniques such as analogies, collage and photo-sorts; and explore different perspectives through "word bubbles" and debate.
You will learn:
Deborah Potts is a qualitative researcher who has led thousands of focus groups and one-on-one interviews. She is co-author of Moderator to the Max: A full-tilt guide to creative, insightful focus groups and depth interviews. Deborah is the senior qualitative researcher at the research company, InsightsNow, Inc. and has taught workshops in conducting qualitative research for nearly two decades.
Session 32: Qualitative Research Strategies
Prerequisites: Basic qualitative interviewing skills
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Intermediate
33: The Rest of the Iceberg: The Logic of Evaluation
It would be a mistake to think that statistics can be taught by covering only descriptive statistics; one must teach inferential statistics as well, since that is the part that bears most strongly on the most common and important practical questions. We have made this mistake in evaluation by teaching only the tools that provide the factual, i.e., non-evaluative, component of our results. That by itself cannot answer evaluative questions such as: Does this program teach reading or basic mathematics well, compared to other options? Or: What is the best way to treat police training of violent offenders?
Through lecture and discussion, this workshop provides a systematic way to identify, validate, and incorporate the relevant values that fill the gap, and explores some of the problems with and consequences of doing this.
You will learn:
Michael Scriven has served as the President of AEA and has taught evaluation in schools of education, departments of philosophy and psychology, and to professional groups, for 45 years. A senior statesman in the field, he has authored over 90 publications focusing on evaluation, and received AEA's Paul F Lazarsfeld Evaluation Theory award in 1986.
Session 33: Logic of Evaluation
Prerequisites: Some knowledge of the usual social science methodologies that are used in evaluation
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Intermediate
34: Survey Design
Building on proven strategies that work in real-world contexts, this workshop will help you plan and execute all aspects of the survey design process. Designed for true beginners with little or no background in survey development, it will introduce you to the fundamentals of survey design and administration and leave you with the tools needed to develop and improve your own surveys. It is perfect for those who need a refresher on survey design or are new to the field of evaluation.
This interactive workshop will combine direct instruction with hands-on opportunities for participants to apply what is learned to their own evaluation projects. We'll explore different types of surveys, the advantages and challenges associated with various administration methods, how to choose the right one, and problems to avoid. You will also receive handouts with sample surveys, item-writing tips, checklists, and resource lists for further information.
You will learn:
Courtney L. Malloy is on the faculty of the Rossier School of Education at the University of Southern California and is a consultant at Vital Research, a research and evaluation firm operating for more than 25 years and specializing in survey design. She has developed and administered numerous surveys and has extensive experience facilitating workshops and training sessions on research and evaluation for diverse audiences.
Session 34: Survey Design
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Beginner
35: Concept Mapping
Through applications of structured group concept mapping, this workshop will introduce key principles of community-centered participation in evaluation, describing and illustrating major decisions and steps in engaging stakeholders. Concept mapping is a well-tested, mixed methods methodology that integrates familiar qualitative group processes with multivariate statistical analyses to help a group describe and organize its thinking on a topic. Ideas are represented visually in a series of easy-to-read graphics that capture specific ideas generated by a group; relationships between ideas; how ideas cluster together; and how those ideas are valued.
The steps in the methodology will be illustrated with project examples. There will be a particular focus on the planning stages of a project, as the decisions at this stage are applicable to any participatory project. A secondary focus will be on the unique analyses that create a shared conceptual framework for complex, systems-based issues and represent that in easy-to-read visuals.
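As an illustrative sketch only, and not Concept Systems' own software or procedure, the kind of multivariate step involved can be approximated by applying multidimensional scaling and clustering to a simulated statement-by-statement co-sort similarity matrix.

```python
# Illustrative sketch only (not Concept Systems' own software or procedure): approximate
# the multivariate step of concept mapping with multidimensional scaling plus clustering
# of a simulated statement co-sort similarity matrix, using scikit-learn.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(5)
n_statements = 12

# Simulated similarity matrix: entry (i, j) = share of sorters who placed statements
# i and j in the same pile. Real projects build this from participants' card sorts.
sim = rng.uniform(0, 1, (n_statements, n_statements))
sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 1.0)
dissimilarity = 1 - sim

# Place statements on a 2-D "point map" so similar statements sit close together...
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissimilarity)

# ...then group nearby points, which is the basis of the cluster map.
labels = AgglomerativeClustering(n_clusters=4).fit_predict(coords)
for cluster in range(4):
    print(f"Cluster {cluster}: statements {np.where(labels == cluster)[0].tolist()}")
```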
You will learn:
Mary Kane, Scott Rosas, and Katy Hall hail from Concept Systems, Inc, a consulting company that uses the concept mapping methodology as a primary tool in its planning and evaluation consulting projects. The presenters have extensive experience with concept mapping and are among the world's leading experts on this approach.
Session 35: Concept Mapping
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Beginner
36: Dominators, Cynics, and Wallflowers: Practical Strategies for Moderating Meaningful Focus Groups
Have you ever watched a recording of yourself moderating a focus group and said: "I should have cut him off earlier?" Have you ever had a client say: "We can't trust focus groups because one or two people dominate and contaminate the rest of the discussion?" If so, this workshop is for you.
In this engaging session, Kahle describes ten types of problem behavior commonly encountered in focus groups. He offers field-tested, practical strategies and specific techniques for preventing, managing, and leveraging difficult behavior, and shares real-life stories from the trenches for a sometimes humorous, certainly insightful view of the impact of problem behavior on focus group research. Come ready to describe the "participants from hell" you have encountered and to share your tips for handling difficult situations. This is a high-energy, fun, and interactive workshop with strategies and tactics you can start using immediately.
You will learn:
Robert Kahle is the author of Dominators, Cynics, and Wallflowers: Practical Strategies for Moderating Meaningful Focus Groups (Paramount Market Publishing, 2007). Kahle has been working on the issue of managing difficult behavior in focus groups and other small group settings since the mid 1990s.
Session 36: Focus Group Moderation
Prerequisites: Have moderated at least a couple of focus groups; understand fundamentals of qualitative research
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Intermediate
37: A Framework for Developing High Quality Performance Measurement Systems of Evaluation
Program funders are increasingly emphasizing the importance of evaluation, often through performance measurement. This workshop explores how to develop high-quality project objectives and performance measures, which are critical to both good proposals and successful evaluations.
Through group discussion, mini-lectures, and practice, you will increase your understanding of the relationships between project activities and intended program outcomes through the development of logic models. This will assist in the development of more sound evaluation designs, thus allowing for the collection of higher-quality and more meaningful data. Participants will be provided with a framework with practical strategies and planning devices to use when writing project objectives and measures, and planning evaluations focused on performance measurement. This framework is useful for a wide array of programs assessing impact and evaluating program outcomes from both single-site and multi-site studies as well as for locally and federally-funded projects.
You will learn:
Mindy Hightower King and Courtney Brown are faculty members at Indiana University, where they are Senior Associates directing and managing evaluations for local, state, and federal agencies, foundations, and non-profit organizations. Drawing on their extensive experience developing and implementing performance measurement systems, they have provided over 20 workshops and lectures to program staff and grantees, and individual technical assistance to at least 40 representatives of grant-receiving agencies.
Session 37: Performance Measurement Systems
Prerequisites: Basic evaluation skills
Scheduled: Wednesday, November 2, 8 AM to 3 PM
Level: Intermediate
38: An Executive Summary is Not Enough: Effective Reporting Techniques for Evaluators
As an evaluator you are conscientious about conducting the best evaluation possible, but how much thought do you give to communicating your results effectively? Do you consider your job complete after submitting a lengthy final report? Reporting is an important skill for evaluators who care about seeing their results disseminated widely and recommendations actually implemented.
Drawing on current research, this interactive workshop will present an overview of three key principles of effective reporting and engage participants in a discussion of reporting's role in effective evaluation. Participants will leave with an expanded repertoire of innovative reporting techniques and will have the opportunity to work on a real example in groups.
You will learn:
Kylie Hutchinson has served since 2005 as the trainer for the Canadian Evaluation Society's Essential Skills Series (ESS) in British Columbia. Her interest in dissemination and communications stems from twenty years of experience in the field of evaluation.
Session 38: Effective Reporting
Scheduled: Wednesday, November 2, 8 AM to 11 AM
Level: Beginner
40: Basics of Sampling for Evaluators
Evaluators are frequently called upon to analyze population data, but performing such tasks requires a basic understanding of the principles of sampling. This workshop targets evaluators who do not have significant experience in quantitative analysis but do have an interest in better understanding this fundamental technique. Beginning by clarifying the scientific language and operational lingo of the field, participants will review the basic sampling approaches and identify the benefits and pitfalls of each. Using illustrations from domestic and international projects to explore the relationship between program implementation and sampling methodology, and the role the evaluator plays in facilitating practical data collection and analysis, attendees will discuss how the conclusions drawn can vary as a consequence of different sampling protocols.
From introduction and demonstration to interactive breakout exercises, participants will apply sampling to real-world examples and translate “statistics” into English. Useful resources and an annotated bibliography will be distributed.
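As a minimal, hypothetical illustration of one idea the workshop addresses (not workshop material), the Python sketch below draws repeated simple random samples from an invented population and shows how estimates vary less as the sample size grows:

```python
# Minimal sketch (invented data): repeated simple random samples from the same
# population give different estimates; larger samples vary less around the truth.
import random
import statistics

random.seed(1)
# Hypothetical population: 10,000 program participants' outcome scores.
population = [random.gauss(mu=50, sigma=10) for _ in range(10_000)]
true_mean = statistics.mean(population)

for n in (25, 100, 400):
    estimates = [statistics.mean(random.sample(population, n)) for _ in range(500)]
    spread = statistics.stdev(estimates)   # empirical standard error of the estimate
    print(f"n={n:>3}: mean of estimates={statistics.mean(estimates):.2f} "
          f"(true {true_mean:.2f}), standard error≈{spread:.2f}")
```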
You will learn:
Michael Cohen holds a doctorate in Developmental Biology and initially used the techniques described in this workshop to explain stochastic elements in the visual processing system. By translating this knowledge to the evaluation of international development programs, he has had the opportunity to design complex sampling paradigms around the world.
Session 40: Basics of Sampling
Scheduled: Wednesday, November 2, 8 AM to 11 AM
Level: Beginner
41: Scoring Performance Assessments
Evaluators increasingly use performance assessments in the review of program outcomes. The reliability of scores from these assessments has been improved by training raters to use scoring guides or rubrics more effectively. This workshop provides an overview of scoring methods that improve the reliability and accuracy associated with performance assessments.
This workshop will highlight how to develop scoring materials, how to train raters, and how to conduct a scoring session, drawing on research across disciplines and fields to describe scoring methods that participants can apply to improve their own assessments. Examples of scoring guides, anchors, and monitoring reports will be shared, along with new developments in the field.
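For readers who want a concrete feel for checking rater consistency, here is a small, hypothetical Python sketch (not the presenters' materials) that computes exact agreement and Cohen's kappa for two raters' rubric scores:

```python
# Hedged sketch: two simple checks of rater consistency often used when
# monitoring a scoring session. The rubric scores below are invented.
from collections import Counter

rater_a = [3, 2, 4, 4, 1, 3, 2, 4, 3, 2]   # rubric scores from rater A
rater_b = [3, 2, 3, 4, 1, 3, 2, 4, 2, 2]   # rubric scores from rater B

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # exact agreement rate

# Cohen's kappa corrects the agreement rate for agreement expected by chance,
# estimated from each rater's marginal score distribution.
ca, cb = Counter(rater_a), Counter(rater_b)
expected = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
kappa = (observed - expected) / (1 - expected)

print(f"exact agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
```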
You will learn:
Robert Johnson, a professor at the University of South Carolina, leads workshops at conferences of the Association of Test Publishers and the American Educational Research Association and is author of Assessing Performance: Developing, Scoring, and Validating Performance Tasks published by Guilford Publications. Min Zhu, also with USC, is a co-presenter.
Session 41: Scoring Performance Assessments
Scheduled: Wednesday, November 2, 8 AM to 11 AM
Level: Beginner
42: Staying Power: Intermediate Consulting Skills for Evaluators
Evaluators and applied researchers may be well versed in academic research processes, but uncertain about the entrepreneurial and small business management skills needed to be successful independent consultants. For those aspiring to a thriving practice, this workshop addresses common problems encountered in the day-to-day running of a small consulting business and identifies some of the key issues faced by consultants.
Through discussion, lecture, anecdote, independent reflection and teamwork, this workshop will help participants problem solve around the ongoing challenges of a consulting practice. For those looking to fine-tune their current professional practice, a seasoned practitioner/award winner/author will share from her more than two decades of experience. This course may go hand-in-hand with Getting Started, geared toward those contemplating the prospect of going into independent consulting.
You will learn:
Gail Barrington founded Barrington Research Group, Inc. in 1985 and has conducted more than 100 program evaluation studies. She is a Certified Management Consultant and a Credentialed Evaluator. In 2008, she won the Canadian Evaluation Society award for her Contribution to Evaluation in Canada. A frequent presenter, she is author of a new book entitled Consulting Start-up & Management: A Guide for Evaluators & Applied Researchers, due out by SAGE in October 2011.
Session 42: Intermediate Consulting Skills
Scheduled: Wednesday, November 2, 8 AM to 11 AM
Level: Intermediate
Empowerment Evaluation builds program capacity and fosters program improvement. It teaches people to help themselves by learning how to evaluate their own programs. The basic steps of empowerment evaluation include: 1) establishing a mission or unifying purpose for a group or program; 2) taking stock - creating a baseline to measure future growth and improvement; and 3) planning for the future - establishing goals and strategies to achieve goals, as well as credible evidence to monitor change. The role of the evaluator is that of coach or facilitator in an empowerment evaluation, since the group is in charge of the evaluation itself.
Employing lecture, activities, demonstration and case examples, the workshop will introduce you to the steps of empowerment evaluation and tools to facilitate the approach.
You will learn:
David Fetterman hails from Stanford University and is the editor of (and a contributor to) the recently published Empowerment Evaluation Principles in Practice (Guilford). He chairs the Collaborative, Participatory and Empowerment Evaluation AEA Topical Interest Group and is a highly experienced and sought-after facilitator.
Session 43: Empowerment Evaluation
Scheduled: Wednesday, November 2, 12 PM to 3 PM
Level: Beginner
44: Fun & Games with Logframes
In the international development community, logic models (logframes) have become the industry standard for summarizing a project or program's design and results. At best, they can strengthen a design; at worst, they can straightjacket a project. Find out how an external paradigm can either alienate or foster local participation, and demystify some of the models useful for capacity building.
The workshop, adopted by the International Federation of Red Cross and Red Crescent Societies for trainings around the world, will use interactive exercises and games for a fun, participatory approach that will help participants understand and work with 1) Objectives; 2) Indicators; 3) Means of Verification; and 4) Assumptions. Examples will be drawn from actual projects in Water Sanitation, Disaster Management, Community Health, Livelihoods, Hygiene Promotion, and HIV/AIDS.
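As a small illustration of the four elements listed above (our own sketch, not an IFRC template; the field names and example are hypothetical), the following Python snippet represents a single logframe row:

```python
# Sketch of one logframe row using the four elements named above.
# Field names and the example are illustrative, not an IFRC template.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogframeRow:
    objective: str                                                    # what the project intends to achieve
    indicators: List[str] = field(default_factory=list)               # how achievement will be measured
    means_of_verification: List[str] = field(default_factory=list)    # where indicator data come from
    assumptions: List[str] = field(default_factory=list)              # external conditions that must hold

outcome = LogframeRow(
    objective="Households in the target community adopt safe water storage",
    indicators=["% of surveyed households storing water in covered containers"],
    means_of_verification=["Annual household survey"],
    assumptions=["Covered containers remain available in local markets"],
)
print(outcome.objective, outcome.indicators, sep="\n")
```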
You will learn:
Scott Chaplowe is a facilitator for the IFRC and has led the Fun and Games methodology in Asia, Africa, Latin America, and Europe. He has over 12 years' experience in program development, management, monitoring, and evaluation with the United Nations, international NGOs, and US-based nonprofit and public organizations.
Session 44: Fun & Games with Logframes
Scheduled: Wednesday, November 2, 12 PM to 3 PM
Level: Beginner
45. Cost-Effectiveness Analysis of Health and Human Services Programs
As the cost of health and human service programs far outpaces our willingness and ability to pay, the importance of, and demand for, measuring their economic efficiency continues to grow. This workshop focuses on building basic skills in cost-effectiveness analysis for health and human service interventions.
You will be taught the basic types of cost-effectiveness analysis and will learn the mathematical model that underlies robust economic evaluations that are crucial to decision makers. We will discuss interpretation and presentation of the results as well as the limitations inherent in this field of evaluation. The session will be interactive and you should come prepared with a program that you are familiar with that is suited for cost-effectiveness analysis. You will receive practical, problem-solving experience with this exercise and will be immediately able to use the skills learned in the workshop in your evaluation activities.
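As a purely illustrative example of the basic calculation behind cost-effectiveness analysis (the figures and labels are invented, not workshop content), the snippet below computes an incremental cost-effectiveness ratio comparing a new intervention with current practice:

```python
# Hedged sketch (invented figures): the incremental cost-effectiveness ratio (ICER)
# compares a new intervention with current practice:
#   ICER = (cost_new - cost_current) / (effect_new - effect_current)
cost_current, effect_current = 120_000.0, 300.0   # e.g., cases successfully treated
cost_new,     effect_new     = 150_000.0, 360.0

icer = (cost_new - cost_current) / (effect_new - effect_current)
print(f"ICER: ${icer:,.0f} per additional case successfully treated")
```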
You will learn:
Edward Broughton is responsible for a portfolio of cost-effectiveness analyses in continuous quality improvement programs in several countries in a variety of health areas including maternal and neonatal health, HIV/AIDS care and treatment and pediatric health. He has provided training on Cost-Effectiveness Analysis in the United States, Afghanistan, and China.
Session 45: Cost-Effectiveness Analysis of HHS
Prerequisites: Basic program evaluation skills
Scheduled: Wednesday, November 2, 12 PM to 3 PM
Level: Intermediate
46. Waawiyeyaa (Circular) Evaluation Tool: Its Use and Lessons Learned
Johnston Research Inc. developed this holistic evaluation tool, grounded in Anishnawbe traditional knowledge, for program providers. It's a self-evaluation tool that allows programs to document both meaningful processes and outcomes over time. It's also a powerful learning tool that promotes growth and self-development among program participants. By applying the tool at various program milestones, a full picture of participants' personal journeys can be documented in a systematic manner. The traditional knowledge tool provides a framework to which program participants can easily relate. Participants like the tool because the storytelling is driven by them, through their own eyes and at their own pace.
The Waawiyeyaa Evaluation Tool includes a video that shows program providers and participants how they can use the tool to document their stories, and an electronic survey tool with an easy question builder that launches a survey whose responses are instantly integrated into an online database. This workshop will demonstrate the power of oral and visual education and share lessons from the field. Group activities will put the tool to use; takeaway materials will include a DVD, a manual, and certification in the use of the Tool.
You will learn:
Andrea L. K. Johnston, CEO, is a skilled trainer and facilitator. Johnston Research Inc. is a forerunner in utilizing new technology and media to develop culture-based evaluation tools that can properly assess and improve culture-based social programming. Since the launch of Johnston Research Inc. in 2001, Johnston has managed and/or participated in more than 60 Aboriginal specific research and evaluation projects totaling over $5 million. She currently serves on the Board of Directors of the Ontario Chapter of the Canadian Evaluation Society.
Session 46: Waawiyeyaa (Circular) Evaluation Tool
Prerequisites: Need to know basics of evaluation practice
Scheduled: Wednesday, November 2, 12 PM to 3 PM
Level: Intermediate
47. Off the Shelf! Using Your Desktop Software for Surveys and Data Collection
In an environment of constrained resources, unleash the power of software sitting on your desk. Evaluators and/or organizations often can’t afford to buy specialized software for data collection and analysis tasks. This workshop shows how to use Microsoft software to enhance professional survey and evaluation efforts.
Through lecture, demonstration and discussion, this workshop will share how to utilize Word, Access and Excel to design data collection instruments and survey projects, integrate sampling frames from various sources, create data entry screens, and analyze qualitative responses in a report format.
You will learn:
Mary Stutzman serves as Director for the Florida State University Survey Research Laboratory, which conducts a wide range of surveys including telephone, mail, and web-based designs. She has more than 30 years of teaching experience, has designed teaching materials, and participated in the design and implementation of AEA’s membership survey.
Session 47: Desktop Software
Prerequisites: Some knowledge of Word (especially Tables) and familiarity with Access would be helpful
Scheduled: Wednesday, November 2, 12 PM to 3 PM
Level: Intermediate
48. Using Theories and Frameworks of Evaluation Practice to Inform and Improve our Work
The majority of our evaluation theories are intended to offer principles, rationales, and organization for the procedural choices made by evaluators and to orient practitioners to the issues and problems with which they must deal (Chelimsky, 1998). These "theories" are qualitative models, points-of-view, and approaches to the process of evaluation. Discussions of evaluation theories often focus on a particular approach. In this workshop, however, participants will be offered an opportunity to discuss a range of theories comparatively, while also examining the theories that fit best with each participant's current social, political, and methodological beliefs, values, and practice.
Through lecture and exercises, the session will introduce you to different evaluation perspectives and how they can influence how we evaluate. We'll use the Evaluation Theory Tree categorization system to gain a fuller understanding of one's own framework preferences, then compare, contrast, and explore theories from different branches to identify their application at each step in designing and conducting an evaluation.
You will learn:
Christina Christie has taught professional development workshops for many organizations, including the Claremont Graduate Professional Development Workshop Series, the AEA/CDC Summer Institute, and the Canadian Evaluation Society (annual meeting workshops). She has published both peer-reviewed and invited papers on the topic of evaluation theory.
Session 48: Theories and Frameworks of Practice
Scheduled: Wednesday, November 2, 12 PM to 3 PM
Level: Beginner
50. Freeware! Open-Source Statistical Analysis Software: An introduction to PSPP
Constrained by costs? If you have a shoestring budget but want the latest and greatest, this workshop will provide an overview of PSPP. Launched in 2010, PSPP is, we think, the next big thing in statistical analysis. Join us as we lead you step by step through the process of finding, loading, using, and interpreting the output of PSPP.
This workshop will provide an overview of open source software -- where it comes from, why it exists, and how it works -- as well as touch upon the importance and benefits of open source software. Presenters will share some of their personal experiences in a lively and energetic participatory setting where they hope to alleviate any technology worries as well as generate enthusiasm for the possibilities ahead.
You will learn:
Susan Rogers and Kristina Mycek are co-presenters who engage in active learning. Both hold graduate degrees in educational psychology and have experience in program evaluation in both educational and mental health settings. Ms. Rogers currently teaches as an assistant professor of psychology at State University of New York at Sullivan; Ms. Mycek works as an associate statistician at Consumers Union of the US.
Session 50: Introduction to PSPP
Scheduled: Sunday, November 6, 9 AM to 12 PM
Level: Beginner
51. Integrating Systems Concepts with Mixed Research Methods to Evaluate Systems Change
Increased interest in large-scale social change has led to increased focus on the evaluation of system change initiatives. But foundations, government agencies, and social entrepreneurs find a wide array of competing approaches and methods for evaluating change – and no one method or approach is best. The quandary becomes how best to move forward, what method(s) to use and why.
Through lecture, discussion and small group activities, this workshop offers a pragmatic mixed methods approach for evaluating four basic types of systems: systems with unknown dynamics; networked systems; nested, multi-level systems; and systems with complex, non-linear dynamics. For each system type, the workshop suggests specific combinations of mixed research methods, demonstrating how they have been applied successfully in real-life evaluations. This workshop goes beyond basic lists of system concepts and evaluation approaches to provide useful, ready-made packages of mixed method evaluation designs for several common types of systems change initiatives.
You will learn:
Margaret Hargreaves is a senior health researcher at Mathematica Policy Research with over 20 years of experience in planning, managing, and evaluating public health, healthcare and social service programs and initiatives targeting diverse low-income populations. She has widely-recognized methodological expertise in complex systems change evaluations and qualitative research methods. Heather Koball is a senior researcher at Mathematica and has extensive experience with a range of quantitative methods. Todd Honeycutt is a health researcher at Mathematica and has substantial experience in applying social network analysis to assess organization collaborations and outcomes.
Session 51: Evaluate Systems Change
Prerequisites: Basic knowledge of core system concepts, core evaluation concepts and mixed research methods
Scheduled: Sunday, November 6, 9 AM to 12 PM
Level: Advanced
While program theory has become increasingly popular over the past 10 to 20 years, guides for developing and using logic models sometimes sacrifice contextual differences in practice in the interests of clear guidance and consistency across organizations. This session is designed for advanced evaluators seeking to develop and use program theory in ways that suit the particular characteristics of the intervention, the evaluation purpose, and the organizational environment.
In addition to challenges identified by participants, the workshop will use mini-lectures, exercises, and discussions to address three particularly important issues: improving the quality of the models by drawing on generic theories of change and program archetypes; balancing the tension between simple models, which communicate clearly, and complicated models, which better represent reality; and using the model to develop and answer evaluation questions that go beyond simply meeting targets.
You will learn:
Patricia Rogers is an experienced facilitator and evaluator, and one of the leading authors in the area of Program Theory. She has taught for The Evaluators Institute and is on faculty at the Royal Melbourne Institute of Technology.
Session 52: Purposeful Program Theory
Prerequisites: Previous experience using program theory/logic models in evaluation; previous experience planning and undertaking evaluations
Scheduled: Sunday, November 6, 9 AM to 12 PM
Level: Advanced
53. Assessing Returns on Investment in Health and Human Services
When considering the investment of limited resources to deliver services, decision makers are often required to study the “business case” associated with their decision. The “business case” is often defined by a return on investment (ROI) analysis, whereby the incremental economic gains of a program or intervention are divided by the cost of that program or intervention. Economic gains can be defined narrowly as business-specific financial or fiscal impacts, or more globally as local economic gains or social impacts. Either way, investors are particularly interested in knowing the value of their resources expended. Costs of a program or intervention can be defined as the program inputs or resources required to implement the service, and the monetary value of those resources.
Because many organizations have fewer resources today to serve their population, it is now more vital than ever for these groups to understand and empirically evaluate the economic impact of implementing services. Funders of these organizations may also ask for (or require) information on how resources are spent and the economic returns for those resources expended. This workshop focuses on building basic skills in benefit-cost analysis (BCA), the primary method for determining returns on investment, whereby program or policy inputs are compared to program or policy outputs defined in monetary terms.
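As a purely illustrative sketch of the arithmetic involved (all figures invented, and not the presenter's materials), the snippet below discounts a program's costs and monetized gains to present value and computes the summary measures described above:

```python
# Hedged sketch (invented figures): ROI as described above divides the incremental
# economic gains of a program by its cost; benefit-cost analysis compares monetized
# benefits and costs, usually after discounting both to present value.
def present_value(cash_flows, rate):
    """Discount a list of yearly amounts (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

program_cost    = [200_000, 50_000, 50_000]   # spending in years 0-2
monetized_gains = [0, 180_000, 220_000]       # e.g., avoided downstream service costs

rate = 0.03                                    # 3% annual discount rate (assumed)
pv_costs    = present_value(program_cost, rate)
pv_benefits = present_value(monetized_gains, rate)

roi_ratio = pv_benefits / pv_costs                      # gains divided by cost, as defined above
net_roi   = (pv_benefits - pv_costs) / pv_costs         # another common convention: net gain per dollar spent
print(f"PV benefits ${pv_benefits:,.0f}, PV costs ${pv_costs:,.0f}, "
      f"benefit-cost ratio {roi_ratio:.2f}, net ROI {net_roi:.2f}")
```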
You will learn:
Phaedra S. Corso is currently an Associate Professor in the Department of Health Policy and Management at the University of Georgia College of Public Health. Previously, she served as the senior health economist in the National Center for Injury Prevention and Control at CDC where she worked for over a decade in the areas of economic evaluation and decision analysis, publishing numerous articles on the cost-effectiveness of prevention interventions and co-editing a book on prevention effectiveness methods in public health.
Session 53: Assessing ROI in HHS
Prerequisites: Basic understanding of statistical methods and evaluation
Scheduled: Sunday, November 6, 9 AM to 12 PM
Level: Intermediate