Professional Development Workshops are hands-on, interactive sessions that provide an opportunity to learn new skills or hone existing ones at Evaluation 2009.
Professional development workshops precede and follow the conference. They differ from sessions offered during the conference itself in at least three ways: (1) each is longer (3, 6, or 12 hours) and thus provides a more in-depth exploration of a skill or area of knowledge; (2) presenters are paid for their time and are expected to have significant experience both in presenting and in the subject area; and (3) attendees pay separately for these workshops and are given the opportunity to evaluate the experience. Sessions are filled on a first-come, first-served basis, and most are likely to fill before the conference begins.
Registration for professional development workshops is handled as part of the conference registration forms; however, you may register for professional development workshops even if you are not attending the conference itself (still using the regular conference registration forms - just uncheck the conference registration box).
Workshop registration fees are in addition to the fees for conference registration:
| | Two Day Workshop | One Day Workshop | Half Day Workshop |
|---|---|---|---|
| AEA Members | $300 | $150 | $75 |
| Full-time Students | $160 | $80 | $40 |
| Non-Members | $400 | $200 | $100 |
Sessions that are closed because they have reached their maximum attendance will be clearly marked below the session name. No further registrations will be accepted for full sessions and we do not maintain waiting lists. Once sessions are closed, they will not be re-opened.
Two Day Workshops
(1) Qualitative Methods; (2) Quantitative Methods; (3) Evaluation 101; (4) Logic Models; (5) Participatory Evaluation; (6) Consulting Skills; (7) Survey Design; (8) Building Evaluation Capacity
One Day Workshops, Tuesday, November 10, 9 AM to 4 PM
(9) Internet Surveys; (10) Social Network Analysis; (11) Evaluation Methodology; (12) Extending Logic Models; (13) Consulting Contracts; (14) Evaluating Social Change Initiatives; (15) Longitudinal Analysis; (16) Exploratory Evaluation
One Day Workshops, Wednesday, November 11, 8 AM to 3 PM
(17) Racism in Evaluation; (18) Advocacy Evaluation; (19) Effect Size Measures; (20) Collaborative Evaluations; (21) Evaluation Dissertation; (22) Data Graphs; (24) Needs Assessment; (25) Theory-Driven Evaluation; (26) Performance Measurement Systems; (27) Organizational Collaboration; (28) Transformative Mixed Methods; (29) Utilization-focused Evaluation; (30) Multilevel Models; (31) Evaluating Complex System Interventions; (32) Politics of Evaluation; (33) Concept Mapping; (34) Using Stories; (35) Reconceptualizing Evaluation; (36) Excellence Imperatives; (37) Uncovering Context
Half Day Workshops, Wednesday, November 11, 8 AM to 11 AM
(38) Success Stories; (39) Introduction to GIS; (40) Effective Reporting; (41) Systems Tools for Multi-site; (42) Moderating Focus Groups
Half Day Workshops, Wednesday, November 11, 12 PM to 3 PM
(43) Cost-Effectiveness; (44) Energy Program Evaluation; (45) Advanced Program Theory; (46) Empowerment Evaluation; (47) Nonparametric Statistics;
(48) Creating Surveys; (49) Conflict Resolution; (50) Focus Group Moderator Training; (51) Systems Thinking; (52) Purposeful Program Theory
1. Qualitative Methods
Qualitative data can humanize evaluations by portraying the people and stories behind the numbers. Qualitative inquiry involves using in-depth interviews, focus groups, observational methods, and case studies to provide rich descriptions of processes, people, and programs. When combined with participatory and collaborative approaches, qualitative methods are especially appropriate for capacity-building-oriented evaluations.
Through lecture, discussion, and small-group practice, this workshop will help you to choose among qualitative methods and implement those methods in ways that are credible, useful, and rigorous. It will culminate with a discussion of new directions in qualitative evaluation.
Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation and qualitative methods, he published the third edition of Qualitative Research and Evaluation Methods (SAGE) in 2002.
2. Quantitative Methods for Evaluators
Quantitative data offers opportunities for numerical descriptions of populations and samples. The challenge is in knowing which analyses are best for a given situation. Designed for the practitioner needing a refresher course and/or guidance in applying quantitative methods to evaluation contexts, the workshop covers the basics of parametric and nonparametric statistics, as well as how to report your findings.
Hands-on exercises and computer demonstrations interspersed with mini-lectures will introduce methods and concepts. The instructor will review examples of research and evaluation questions and the statistical methods appropriate to developing a quantitative data-based response.
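As a rough illustration of the parametric/nonparametric distinction covered here, the sketch below compares two hypothetical groups with a t-test and with a Mann-Whitney U test in Python; the data are simulated for the example and are not workshop materials.

```python
# Illustrative sketch (simulated data): the same comparison run with a
# parametric test and with its common nonparametric counterpart.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
treatment = rng.normal(loc=72, scale=10, size=40)    # hypothetical outcome scores
comparison = rng.normal(loc=68, scale=10, size=40)

# Parametric: independent-samples t-test (assumes roughly normal scores)
t_stat, t_p = stats.ttest_ind(treatment, comparison)

# Nonparametric: Mann-Whitney U test (rank-based, no normality assumption)
u_stat, u_p = stats.mannwhitneyu(treatment, comparison, alternative="two-sided")

print(f"t-test:       t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.3f}")
```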
Katherine McKnight applies quantitative analysis as Director of Program Evaluation for Pearson Achievement Solutions and is co-author of Missing Data: A Gentle Introduction (Guilford, 2007). Additionally, she teaches Research Methods, Statistics, and Measurement in Public and International Affairs at George Mason University in Fairfax, Virginia.
3. Evaluation 101: Intro to Evaluation Practice
Begin at the beginning and learn the basics of evaluation from an expert trainer. The session will focus on the logic of evaluation to answer the key question: "What resources are transformed into what program evaluation strategies to produce what outputs for which evaluation audiences, to serve what purposes?" Enhance your skills in planning, conducting, monitoring, and modifying the evaluation so that it generates the information needed to improve program results and communicate program performance to key stakeholder groups.
A case-driven instructional process, using discussion, exercises, and lecture will introduce the steps in conducting useful evaluations: Getting started, Describing the program, Identifying evaluation questions, Collecting data, Analyzing and reporting, and Using results.
John McLaughlin has been part of the evaluation community for over 30 years working in the public, private, and non-profit sectors. He has presented this workshop in multiple venues and will tailor this two-day format for Evaluation 2009.
4. Logic Models for Program Evaluation and Planning
Many programs fail to start with a clear description of the program and its intended outcomes, undermining both program planning and evaluation efforts. The logic model, as a map of what a program is and intends to do, is a useful tool for clarifying objectives, improving the relationship between activities and those objectives, and developing and integrating evaluation plans and strategic plans.
First, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining and validating models. Then, we'll examine how to improve logic models using some fundamental principles of "program theory", demonstrate how to use logic models effectively to help frame questions in program evaluation, and show some ways logic models can also inform strategic planning. Both days use modules with presentations, small group case studies, and debriefs to reinforce group work.
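As a minimal illustration of the idea (a hypothetical program, not workshop material), a logic model can be written down as a simple chain from inputs through activities and outputs to intended outcomes:

```python
# Minimal sketch (hypothetical program): a logic model as an explicit chain
# from resources to intended results.
logic_model = {
    "inputs":              ["funding", "trained staff", "clinic space"],
    "activities":          ["outreach visits", "screening events", "referrals"],
    "outputs":             ["800 residents screened", "120 referrals made"],
    "short_term_outcomes": ["increased awareness of risk factors"],
    "long_term_outcomes":  ["reduced disease burden in the target population"],
}

for stage, items in logic_model.items():
    print(f"{stage}: {', '.join(items)}")
```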
Thomas Chapel is the central resource person for planning and program evaluation at the Centers for Disease Control and Prevention and a sought after trainer. Tom has taught this workshop for the past four years to much acclaim.
5. Participatory Evaluation
Participatory evaluation practice requires evaluators to be skilled facilitators of interpersonal interactions. This workshop will provide you with theoretical grounding (social interdependence theory, conflict theory, and evaluation use theory) and practical frameworks for analyzing and extending your own practice.
Through presentations, discussion, reflection, and case study, you will experience strategies to enhance participatory evaluation and foster interaction. You are encouraged to bring examples of challenges faced in your own practice for discussion; this workshop is consistently lauded for its ready applicability to real-world evaluation contexts.
Jean King has over 30 years of experience as an award-winning teacher at the University of Minnesota. As an evaluation practitioner, she has received AEA’s Myrdal award for outstanding evaluation practice. Laurie Stevahn is a professor at Seattle University with extensive facilitation experience as well as applied experience in participatory evaluation.
6. Consulting Skills for Evaluators: Getting Started
Program evaluators who choose to become independent consultants will find that the intersection of business and research can offer tremendous personal reward, but it can also be challenging and intimidating unless they have the simple but important skills required to be successful. This practical workshop addresses the unique issues faced by individuals who want to become independent consultants, who have recently taken the plunge, or who need to re-tool their professional practice. Participants will have the opportunity to explore four different skill sets that are required to support a successful evaluation consulting practice.
Through lecture, anecdote, discussion, small-group exercises, and independent reflection, this workshop will help you to problem solve around this career choice and develop an agenda for action.
Gail V. Barrington is an independent consultant who started her consulting firm, Barrington Research Group, Inc. in 1985. She has conducted over 100 program evaluation studies and has made a significant contribution to the field of evaluation through her practice, writing, teaching, training, mentoring and service. In 2008 she won the Canadian Evaluation Society award for her Contribution to Evaluation in Canada.
7. Survey Design and Administration
Designed for true beginners with little or no background in survey development, this workshop will introduce you to the fundamentals of survey design and administration and leave you with tools for developing and improving your own surveys as part of your evaluation practice. Building on proven strategies that work in real-world contexts, the facilitators will help you build confidence in planning and executing all aspects of the survey design process.
This interactive workshop will combine direct instruction with hands-on opportunities for participants to apply what is learned to their own evaluation projects. We will explore different types of surveys and how to choose the right one, how to identify the domains a survey should cover, how to administer the survey, and how to increase response rates and data quality. You will receive handouts with sample surveys, item-writing tips, checklists, and resource lists for further information.
Courtney Malloy and Harold Urman are consultants at Vital Research, a research and evaluation firm that specializes in survey design. They both bring to the session extensive experience facilitating workshops and training sessions on research and evaluation for diverse audiences.
8. Building Evaluation Capacity Within Community Organizations
Are you working with community groups (coalitions, nonprofits, social service agencies, local health departments, volunteers, school boards) that are trying to evaluate the outcomes of their work to meet a funding requirement, an organizational expectation, or to enhance their own program performance?
Join us in this highly interactive workshop where you will practice and reflect on a variety of activities and adult learning techniques associated with three components of evaluation planning: focus, data collection, and communicating. Try these activities out, assess their appropriateness for your own situation, and expand your toolbox. We will draw from a compendium of practical tools and strategies that we have developed over the past years and have found useful in our own work. We encourage you to bring your own ‘best practices' to share as we work towards building the evaluation capacity of communities.
Ellen Taylor-Powell is widely recognized for her work in evaluation capacity building. Her 20 years in Extension have focused continuously on evaluation training and capacity building with concentration on individual, team, and organizational learning. She will lead a team of facilitators with extensive experience both in teaching adults and in working with community groups and agencies.
9. Internet Survey Construction and Administration
Designed for those new to online survey design, this workshop will explore all aspects of the online design and deployment process. We'll look at selecting the right tools for the job, the basics of online survey construction, methods for generating clear and valid questions using appropriate response formats, and administration in web based contexts.
Building on mini-lectures, computer demonstrations, and hands-on offline exercises, you will increase your skills and confidence in online survey construction.
Drawing on research and personal experience, together we will also explore ways to break through the clutter to increase response rates while addressing issues of ethical practices and ensuring reliable and valid responses. You will learn:
Joel T Nadler and Nicole L Cundiff are primary trainers of web-based programs and techniques at Applied Research Consultants (ARC), a graduate student-run consulting firm at Southern Illinois University Carbondale (SIUC), and have written numerous reports utilizing web based surveys. Rebecca Weston is an associate professor of psychology at SIUC who has taught graduate and undergraduate classes in psychological measurement.
Session 9: Internet Survey
Scheduled: Tuesday, November 10, 9:00 AM to 4:00 PM
Level: Beginner
10. Social Network Analysis Methods for Program Monitoring and Evaluation
This workshop is full. No more registrations will be accepted for spaces in this workshop and the waiting list for the workshop is full. Please select an alternative.
Social Network Analysis (SNA) measures and illustrates knowledge networks depicting the relationships, and flow of information and ideas between individuals, groups, or organizations. It has many applications in evaluation including improving our understanding of complex programs, relationships and communications patterns among stakeholders, and examination of systems. The workshop will cover network theory, types of network data, social network generating tools, measures used in social network data, and the use of network analysis software for visual displays and analysis.
Following an introductory lecture that introduces SNA and situates it within the field of program evaluation, you will gain hands-on experience collecting and analyzing different types of network data. We'll examine a real-world case study that illustrates the applicability of SNA to program monitoring and evaluation and ensure that you leave with the resources you need to learn more.
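To give a flavor of the measures involved, the sketch below computes a few common network statistics with the open-source networkx library; the organizations and ties are invented, and the workshop's own software choices may differ.

```python
# Illustrative sketch (invented data): common whole-network and node-level
# measures computed with networkx.
import networkx as nx

# Hypothetical referral ties among partner organizations
edges = [("Clinic", "Food Bank"), ("Clinic", "Shelter"),
         ("Shelter", "Job Center"), ("Food Bank", "Shelter"),
         ("Job Center", "Clinic")]
G = nx.Graph(edges)

print("Density:", round(nx.density(G), 2))               # overall connectedness
print("Degree centrality:", nx.degree_centrality(G))     # who holds the most ties
print("Betweenness:", nx.betweenness_centrality(G))      # who brokers between others
```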
You will learn:
Gene Ann Shelley and Holly Fisher use SNA in their work at the Centers for Disease Control and Prevention. An experienced facilitator and author, Shelley regularly serves as CDC's lead on social network projects. Fisher has served as program evaluator for SNA-focused projects and is the recipient of CDC's Director's Recognition Award and Prevention Research Branch Award for her work on the Social Networks Demonstration Project.
Session 10: Social Network Analysis
Scheduled: Tuesday, November 10, 9:00 AM to 4:00 PM
Level: Beginner
11. From 'What's So?' To 'So What?': A Nuts and Bolts Introduction to Evaluation Methodology
Evaluation logic and methodology is a set of principles (logic) and procedures (methodology) that guides evaluators in the task of blending descriptive data with relevant values to answer important questions about quality, value, and/or importance.
This workshop combines mini-lectures, demonstrations, small group exercises, and interactive discussions to offer a 'nuts and bolts' introduction to concrete, easy-to-follow methods for designing and conducting an evaluation. The workshop spells out a coherent approach to using evaluation methodology that draws on the best of the best, adds its own twists to fill the gaps, and includes practical nuts-and-bolts guidelines and tools that could be applied in a real-world setting.
You will learn:
E Jane Davidson runs her own successful consulting practice and is the 2005 recipient of AEA's Marcia Guttentag Award. She is the author of Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation (Sage, 2004).
Session 11: Evaluation Methodology
Scheduled: Tuesday, November 10, 9:00 AM to 4:00 PM
Level: Beginner
12. Extending Logic Models: Beyond the Traditional View
When should we use logic models? How may we maximize their explanatory value and usefulness as an evaluation tool? This workshop will present three broad topics that will increase the value of using logic models. First, we'll explore an expanded view of what forms logic models can take, including 1) the range of information that can be included, 2) the use of different forms and scales, 3) the types of relationships that may be represented, and 4) uses of models at different stages of the evaluation life cycle. Second, we'll examine how to balance the relationship between visual design and information density in order to make the best use of models with various stakeholders and technical experts. Third, we'll consider epistemological issues in logic modeling, addressing 1) strengths and weaknesses of 'models', 2) relationships between models, measures, and methodologies, and 3) conditions under which logic models are and are not useful.
Through lecture and both small and large group discussions, we will move beyond the traditional view of logic models to examine their applicability, value, and relevance to attendees' own experiences.
You will learn:
Jonathan Morell works at Vector Research Center for Enterprise Performance and TechTeam Government Solutions, and has been a practicing evaluator for over 20 years. An experienced trainer, he brings practical hands-on examples from real-world situations that build the connection between theory and practice.
Session 12: Extending Logic Models
Prerequisites: Basic knowledge of program evaluation and logic model construction
Scheduled: Tuesday, November 10, 9:00 AM to 4:00 PM
Level: Intermediate
13. Navigating the Waters of Evaluation Consulting Contracts
Are you looking for a compass, or maybe just a little wind to send you in the right direction when it comes to your consulting contracts? Have you experienced a stormy contract relationship and seek calm waters?
This workshop combines mini lecture, discussion, skills practice, and group work to address evaluation contract issues. You will learn about important contractual considerations such as deliverables, timelines, confidentiality clauses, rights to use/ownership, budget, client and evaluator responsibilities, protocol, data storage and use, pricing, contract negotiation, and more. Common mistakes and omissions, as well as ways to navigate through these will be covered. You will receive examples of the items discussed, as well as resources informing the contract process. You are encouraged to bring topics for discussion or specific questions.
Kristin Huff is a seasoned evaluator and facilitator with over 15 years of training experience and a Master of Science degree in Experiential Education. Huff has managed consulting contracts covering the fields of technology, fundraising, nonprofit management, and evaluation, and has developed and managed more than 400 consulting contracts in the past eight years.
14. Evaluating Social Change Initiatives
The Seven Cs Social Change Framework provides a method for conceptualizing, planning for, and evaluating social change initiatives. As the focus on 'systems change' and social change initiatives continues to grow within both the funding and programming communities, the need for a method of measuring the progress of these initiatives is becoming more apparent. The Seven Cs Social Change Framework helps to bridge the divide between direct service programming and systems change, providing a systems-based approach to assessment and evaluation that can be tailored to the unique aspects of different initiatives as well as the unique aspects of different sites within a single initiative.
Using presentations, hands-on case examples, discussion, and collaborative group work, this highly interactive workshop gives you the opportunity to apply the conceptual framework provided by the Seven Cs Social Change Framework to a variety of change initiatives.
You will learn:
Marah Moore has over 15 years of experience as a trainer and an evaluation practitioner in areas ranging from education to health and welfare to social service programs. She has worked on projects throughout the United States as well as in Uganda and Russia.
Session 14: Evaluating Social Change Initiatives
Prerequisites: Minimum two years experience working with systems change initiatives
Scheduled: Tuesday, November 10, 9:00 AM to 4:00 PM
Level: Intermediate
This workshop is full. No more registrations will be accepted for spaces in this workshop and the waiting list for the workshop is full. Please select an alternative.
15. Longitudinal Analysis
Many evaluation studies make use of longitudinal data. However, while much can be learned from repeated measures, the analysis of change is also associated with a number of special problems. This workshop reviews how traditional methods in the analysis of change, such as the paired t-test and repeated measures ANOVA or MANOVA, address these problems. The core of the workshop will be an introduction to SEM-based latent growth curve modeling (LGM). We will show how to specify, estimate, and interpret growth curve models. In contrast to most traditional methods, which are restricted to the analysis of mean changes, LGM allows the investigation of unit-specific (individual) changes over time. We will also discuss recent advancements of LGM, including multiple group analyses, the inclusion of time-varying covariates, and cohort sequential designs.
A mixture of PowerPoint presentation, group discussion, and exercises with a special focus on model specification will help us to explore LGM in contrast to more traditional approaches to analyzing change. We will demonstrate processes for setting up and estimating models using different software packages, and a number of practical examples along with sample output will illustrate the material.
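For readers new to the notation, a generic linear latent growth curve model for person i at occasion t can be written as follows (standard notation assumed here, not taken from the presenters' materials):

```latex
\begin{aligned}
y_{it}    &= \eta_{0i} + \eta_{1i}\,\lambda_t + \varepsilon_{it} \\
\eta_{0i} &= \alpha_0 + \zeta_{0i} \\
\eta_{1i} &= \alpha_1 + \zeta_{1i}
\end{aligned}
```

Here the time scores λ_t are fixed (e.g., 0, 1, 2, ...), α_0 and α_1 are the mean intercept and mean slope, and the variances of ζ_0i and ζ_1i capture individual differences in starting level and rate of change; this unit-specific component is what distinguishes LGM from methods restricted to mean change.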
You will learn:
Manuel C Voelkle is a research scientist at the Max Planck Institute in Berlin, Germany. He teaches courses on advanced multivariate data analysis and research design and research methods. Werner W Wittmann is professor of psychology at the University of Mannheim, where he heads a research and teaching unit specializing in research methods, assessment and evaluation research.
Session 15: Longitudinal Analysis
Prerequisites: Familiarity with structural equation models and regression analytic techniques. Experience with analyzing longitudinal data is useful but not necessary.
Scheduled: Tuesday, November 10, 9:00 AM to 4:00 PM
Level: Intermediate
16. Exploratory Evaluation
This workshop provides participants with knowledge and skills related to five exploratory evaluation approaches: evaluability assessment, rapid-feedback evaluation, service delivery assessment, small-sample studies, and a simplified form of zero-base budgeting. These approaches can help produce a preliminary estimate of program effectiveness. When used appropriately, they can aid in designing future evaluations to improve program quality, efficiency, and value; strengthen accountability; or inform the policy process.
Mini-lectures will introduce five approaches to exploratory evaluation. Small group exercises and discussion will provide you with an opportunity to increase your knowledge and skills related to the five approaches.
You will learn:
Joe Wholey has substantial experience with exploratory evaluation approaches through work with The Urban Institute, the United States Department of Health and Human Services, and the United States Government Accountability Office. An internationally known expert on exploratory evaluation, he is a co-editor of the Handbook of Practical Program Evaluation (Jossey-Bass, 2004).
Session 16: Exploratory Evaluation
Prerequisites: Substantial working knowledge of policy analysis, management analysis, social science research, or program evaluation.
Scheduled: Tuesday, November 10, 9:00 AM to 4:00 PM
Level: Intermediate
17. Identifying, Measuring and Interpreting Racism in Evaluation Efforts
Historically, racism has been a contributing factor to the racial disparities that persist across contemporary society. This workshop will help you to identify, frame, and measure racism's presence. The workshop includes strategies for removing racism from various evaluation processes, as well as ways of identifying types of racism that may be influencing the contexts in which racial disparities programs and other social programs operate.
Through mini-lectures, discussion, small group exercises, and handouts, we will practice applying workshop content to real societal problems such as identifying racial biases that may be embedded in research literature, identifying the influence of racism in the contexts of racial disparities programs, and eliminating inadvertent racism that may become embedded in cross-cultural research. This workshop will help you to more clearly identify, frame, measure, interpret, and lessen the presence of racism in diverse settings.
Pauline Brooks is an evaluator and researcher by formal training and practice. She has had years of university-level teaching and evaluation experience in both public and private education, particularly in the fields of education, psychology, social work and public health. For over 20 years, she has worked in culturally diverse settings focusing on issues pertaining to underserved populations, class, race, gender, and culture.
18. Advocacy Evaluation
Evaluations of advocacy, community organizing, and other policy change efforts present unique challenges for evaluators. These challenges include identifying appropriate short-term outcomes that connect to longer-term policy goals, determining evaluation methods, and integrating evaluation into advocates’ everyday work. Despite these challenges, evaluators are developing ways to understand, define, and measure the progress and results of policy change efforts.
Through presentation, discussion, and demonstration, the presenters will walk you through case studies that offer examples of strategies and tools used in real-world evaluations such as The Annie E. Casey Foundation’s KIDS COUNT Initiative. Small group discussion will allow you to learn what strategies other participants have used in their evaluations. You will leave the workshop with resources to help evaluate future advocacy and policy change efforts.
An experienced team of facilitators from Organizational Research Services (ORS), Innovation Network and The Annie E. Casey Foundation will lead the session. The facilitators have developed several capacity-building resource materials to support advocacy evaluation, including A Guide to Measuring Advocacy and Policy (ORS for Casey, 2007) and Speaking for Themselves: Advocates’ Perspectives on Evaluation (Innovation Network for Casey and The Atlantic Philanthropies, 2008). Additionally, facilitators have substantial experience designing and conducting advocacy evaluations for a variety of organizations.
Session 18: Advocacy Evaluation
Prerequisites: Familiarity with the field of advocacy evaluation is helpful but not necessary.
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Intermediate
19. Using Effect Size and Association Measures
Answer the call to report effect size and association measures as part of your evaluation results. Improve your capacity to understand and apply a range of measures including: standardized measures of effect size from Cohen, Glass, and Hedges; eta-squared; omega-squared; the intraclass correlation coefficient; and Cramer’s V. Together we will explore how to select the best measures, how to perform the needed calculations, and how to analyze, interpret, and report on the output in ways that strengthen your overall evaluation.
Through mini-lecture, hands-on exercises, and computer-based demonstration, you will improve your understanding of the theoretical foundation and computational procedures for each measure as well as ways to identify and correct for bias.
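As a small worked example of the kinds of calculations involved, the sketch below computes Cohen's d and its Hedges small-sample correction from hypothetical scores using the standard textbook formulas; it is illustrative only, not part of the workshop materials.

```python
# Illustrative sketch (hypothetical scores, not workshop code): a standardized
# mean-difference effect size computed directly from its textbook formula.
import numpy as np

group_a = np.array([74, 81, 69, 77, 85, 72, 79, 70])  # hypothetical outcome scores
group_b = np.array([68, 75, 63, 71, 70, 66, 73, 69])

# Cohen's d: mean difference divided by the pooled standard deviation
n1, n2 = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n1 - 1) * group_a.var(ddof=1) + (n2 - 1) * group_b.var(ddof=1))
                    / (n1 + n2 - 2))
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

# Hedges' g: the same quantity with a small-sample bias correction
hedges_g = cohens_d * (1 - 3 / (4 * (n1 + n2) - 9))

print(f"Cohen's d = {cohens_d:.2f}, Hedges' g = {hedges_g:.2f}")
```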
You will learn:
Jack Barnette is Professor of Biostatistics at the University of Colorado School of Public Health. He has taught courses in statistical methods, program evaluation, and survey methodology for more than 30 years. He has been conducting research and writing on this topic for more than ten years. Jack is a regular facilitator both at AEA's annual conference and the CDC/AEA Summer Evaluation Institute. He was awarded the Outstanding Commitment to Teaching Award by the University of Alabama and is a member of the ASPH/Pfizer Academy of Distinguished Public Health Teachers.
Session 19: Effect Size Measures
Prerequisites: Univariate statistics through ANOVA and understanding of and use of confidence levels.
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Intermediate
20. Collaborative Evaluations: A Step-by-Step Model for the Evaluator
Do you want to engage and succeed in collaborative evaluations? Using clear and simple language, the presenter will outline key concepts and effective tools/methods to help master the mechanics of collaboration in the evaluation environment. Building on a theoretical grounding, you will explore how to apply the Model for Collaborative Evaluations (MCE) to real-life evaluations, with a special emphasis on those factors that facilitate and inhibit stakeholders' participation.
Using highly interactive discussion, demonstration, hands-on exercises and small group work, each section addresses fundamental factors contributing to the six model components that must be mastered in order to succeed in collaborations. You will gain a deep understanding of how to develop collaborative relationships in the evaluation context.
Liliana Rodriguez-Campos is the Program Chair of the Collaborative, Participatory & Empowerment TIG and a faculty member in Evaluation at the University of South Florida. An experienced facilitator, with consistently outstanding reviews from AEA attendees, she has developed and offered training in both English and Spanish to a variety of audiences in the US and internationally.
21. How to Prepare an Evaluation Dissertation Proposal
Developing an acceptable dissertation proposal often seems more difficult than conducting the actual research. Further, proposing an evaluation as a dissertation study can raise faculty concerns of acceptability and feasibility. This workshop will lead you through a step-by-step process for preparing a strong, effective dissertation proposal with special emphasis on the evaluation dissertation.
The workshop will cover such topics as the nature, structure, and multiple functions of the dissertation proposal; how to construct a compelling argument; how to develop an effective problem statement and methods section; and how to provide the necessary assurances to get the proposal approved. Practical procedures and review criteria will be provided for each step. The workshop will emphasize application of the knowledge and skills taught to the participants’ personal dissertation situation through the use of an annotated case example, multiple self-assessment worksheets, and several opportunities for questions of personal application.
You will learn:
Nick L Smith is the co-author of How to Prepare a Dissertation Proposal (Syracuse University Press) and a past-president of AEA. He has taught research and evaluation courses for over 20 years at Syracuse University and is an experienced workshop presenter. He has served as a dissertation advisor to multiple students and is the primary architect of the curriculum and dissertation requirements in his department.
Session 21: Evaluation Dissertation
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
22. Data Graphs
Visualizing raw data and empirical findings is a crucial part of most evaluation work. Data graphs are often used to present information in reports, presentations, scientific papers, and newspapers. Yet visualization of data is often conducted without recognizing that there are theoretically derived guidelines that help create graphical excellence. Basic principles can help you avoid ‘chart junk.’ Knowing the essentials of data graphs will improve the quality of your evaluation products and contribute to better practice in evaluation.
Following an introductory lecture on the theoretical principles of graphical excellence, you will gain hands-on experience creating graphs from data sets. Live demonstrations with selected software (MS Excel and Tableau) and exploratory discussions will illustrate the key tenets of effective charts and graphs.
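The workshop demonstrates Excel and Tableau; purely as an illustration of trimming non-data ink, the hypothetical sketch below does the same kind of cleanup in Python with matplotlib.

```python
# Illustrative sketch (invented data): a bar chart with some common
# 'chart junk' removed, following basic graphical-excellence advice.
import matplotlib.pyplot as plt

sites = ["Site A", "Site B", "Site C", "Site D"]
completion_rates = [0.62, 0.71, 0.58, 0.80]

fig, ax = plt.subplots(figsize=(5, 3))
ax.bar(sites, completion_rates, color="steelblue")

# Reduce non-data ink: drop the top/right frame and heavy tick marks
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.tick_params(length=0)
ax.set_ylabel("Program completion rate")
ax.set_ylim(0, 1)

fig.tight_layout()
fig.savefig("completion_rates.png", dpi=150)
```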
You will learn:
Frederic Malter is a senior research specialist at the University of Arizona. He has established guidelines and improved graphical displays for multiple organizations including University of Arizona and the Arizona Department of Health Services. Malter was a 2008 AEA Conference presenter and received overwhelming requests to provide a more in-depth workshop for Evaluation 2009.
23. Building Participatory Democratic Evaluation Into Community Engagement
24. Needs Assessment
Needs assessment (NA) is often assigned to evaluators with the assumption that they have been trained in this form of assessment. Yet surveys of evaluation training indicate that only a small number of courses on needs assessment were or are being taught. Needs assessment topics such as the process of NA, how to get the assessment started, analyzing and presenting data, and prioritizing needs are key capacities for many effective evaluations.
This daylong workshop will consist of four mini-workshops exploring different aspects of NA. Each mini-workshop will include hands-on exercises around case studies, discussion, and explanation. Much of the content provided parallels a Needs Assessment Kit newly developed by the facilitators.
You will learn:
James Altschuld is a well-known author and trainer of needs assessment. Jeffry White specializes in needs based surveys. Both are experienced presenters and are co-authors of a new needs assessment kit to be published by Sage in the Fall of 2009.
25. Theory-Driven Evaluation
Learn the theory-driven approach for assessing and improving program planning, implementation, and effectiveness. You will explore the conceptual framework of program theory and its structure, which facilitates precise communication between evaluators and stakeholders regarding evaluation needs and approaches to addressing those needs. From there, the workshop moves to how program theory and theory-driven evaluation are useful in the assessment and improvement of a program at each stage throughout its life cycle.
Mini-lectures, group exercises, and case studies will illustrate the use of program theory and theory-driven evaluation for program planning, initial implementation, mature implementation, and outcomes. In the outcome stages, you will explore the differences among outcome monitoring, efficacy evaluation, and effectiveness evaluation.
You will learn:
Huey Chen, a Senior Evaluation Scientist at the Centers for Disease Control and Prevention and the 1993 recipient of the AEA Lazarsfeld Award for contributions to evaluation theory, is the author of Theory-Driven Evaluations (SAGE), the classic text for understanding program theory and theory-driven evaluation, and more recently of Practical Program Evaluation (2005). He is an internationally known workshop facilitator on the subject.
26. Developing Performance Measurement Systems
Program funders are increasingly emphasizing the importance of evaluation, often through performance measurement. This includes using high quality project objectives and performance measures as well as logic models to demonstrate the relationship between project activities and program outcomes. These steps help in the development of sound evaluation design thus allowing for the collection of higher quality and more meaningful data.
Building on mini-lectures, exercises, and group discussion, you will gain experience working with a performance measurement framework that can be used to evaluate program outcomes for single and multi-site locations as well as locally and federally funded projects. You will participate in hands-on activities ranging from constructing logic models and identifying project objectives, to identifying high quality performance measures.
You will learn:
Courtney Brown and Mindy Hightower King have extensive experience developing and implementing performance measurement systems at local, state, and national levels. In addition to directing and managing evaluations for a myriad of organizations, they provide ongoing technical assistance and training to the United States Department of Education grantees in order to strengthen performance measurement.
27. Evaluating Organizational Collaboration
“Collaboration” is a ubiquitous, yet misunderstood, under-empiricized, and un-operationalized construct. Program and organizational stakeholders looking to do and be collaborative struggle to identify, practice, and evaluate it with efficacy. This workshop will explore how the principles of collaboration theory can inform evaluation practice.
You will have the opportunity to increase your capacity to quantitatively and qualitatively examine the development of inter- and intra-organizational partnerships. Together, we will examine assessment strategies and specific tools for data collection, analysis, and reporting. We will practice using assessment techniques that are currently being used in the evaluation of PreK-16 educational reform initiatives predicated on organizational collaboration (professional learning communities), as well as other grant-sponsored endeavors, including the federally funded Safe Schools/Healthy Students initiative.
You will learn:
Rebecca Gajda has facilitated workshops and courses for adult learners for more than 10 years and is on the faculty at the University of Massachusetts - Amherst. Her most recent publication on the topic of organizational collaboration may be found in the March 2007 issue of The American Journal of Evaluation. Dr. Gajda notes, “I love creating learning opportunities in which all participants learn, find the material useful, and have fun at the same time.”
28. Transformative Mixed Methods Evaluations
This workshop focuses on the methodological and contextual considerations in designing and conducting transformative mixed methods evaluation. It is geared to meet the needs of evaluators working in communities that reflect diversity in terms of culture, race/ethnicity, religion, language, gender, and disability. Deficit perspectives that are taken as common wisdom can have a deleterious effect on both the design of a program and the outcomes of that program. A transformative mixed methods approach enhances an evaluator's ability to accurately represent how this can happen.
Interactive activities based upon case studies will give you an opportunity to apply theoretical guidance that will be provided in a plenary session, a mini-lecture, and small- and large-group discussions. Alternative strategies based on transformative mixed methods are illustrated through reference to the presenters' own work, the work of others, and the challenges that participants bring to the workshop.
You will learn:
Donna Mertens is a Past President of the American Evaluation Association who teaches evaluation methods and program evaluation to deaf and hearing graduate students at Gallaudet University in Washington, D.C. Mertens recently authored Transformative Research and Evaluation (Guilford). Katrina L Bledsoe is a senior research associate at Walter R. McDonald & Associates, conducting and managing evaluations in culturally complex communities nationally.
29. Utilization-focused Evaluation
Evaluations should be useful, practical, accurate, and ethical. Utilization-focused Evaluation is a process that meets these expectations and promotes use of evaluation from beginning to end. With a focus on carefully targeting and implementing evaluations for increased utility, this approach encourages situational responsiveness, adaptability, and creativity. This training is aimed at building capacity to think strategically about evaluation and increase commitment to conducting high quality and useful evaluations.
Utilization-focused evaluation focuses on the intended users of the evaluation in the context of situational responsiveness with the goal of methodological appropriateness. An appropriate match between users and methods should result in an evaluation that is useful, practical, accurate, and ethical, the characteristics of high quality evaluations according to the profession's standards. With an overall goal of teaching you the process of Utilization-focused Evaluation, the session will combine lectures with concrete examples and interactive case analyses.
You will learn:
Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation, he bases this workshop on the newly completed fourth edition of his best-selling evaluation text, Utilization Focused Evaluation: The New Century Text (SAGE).
Session 29: Utilization-focused Evaluation
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
30. Introduction to Multilevel Models in Program and Policy Evaluation
Multilevel models (also called hierarchical linear models) open the door to understanding the inter-relationships among nested structures (students in classrooms in schools in districts for instance), or the ways evaluands change across time (perhaps longitudinal examinations of health interventions). This workshop will demystify multilevel models and present them at an accessible level, stressing their practical applications in evaluation.
Through discussion and hands-on demonstrations, the workshop will address four key questions: When are multilevel models necessary? How can they be implemented using standard software? How does one interpret multilevel results? What are recent developments in this arena?
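As a minimal sketch of what a two-level model looks like in practice, the example below fits a random-intercept model of simulated student scores nested in schools using the statsmodels library; the data, variable names, and software choice are assumptions for illustration, not the workshop's own materials.

```python
# Minimal sketch (simulated data): a two-level random-intercept model of
# students nested in schools, fit with statsmodels' MixedLM.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_schools, n_students = 20, 30
school = np.repeat(np.arange(n_schools), n_students)   # school id per student
school_shift = rng.normal(0, 5, n_schools)[school]     # between-school variation
hours = rng.uniform(0, 10, n_schools * n_students)     # student-level predictor
score = 60 + 2.0 * hours + school_shift + rng.normal(0, 8, n_schools * n_students)
df = pd.DataFrame({"score": score, "hours": hours, "school": school})

# Fixed effect of study hours, intercepts allowed to vary by school
result = smf.mixedlm("score ~ hours", data=df, groups=df["school"]).fit()
print(result.summary())
```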
You will learn:
Sanjeev Sridharan of the University of Toronto has repeatedly taught multilevel models for AEA as well as for the SPSS software company. His recent work on this topic has been published in the Journal of Substance Abuse Treatment, Proceedings of the American Statistical Association and Social Indicators Research. Known for making the complex understandable, his approach to the topic is straightforward and accessible.
Session 30: Multilevel Models
Prerequisites: Basic statistics including some understanding of measures of dispersion, tests of significance, and regression.
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Intermediate
31. Evaluating Complex System Interventions
Are you uncertain about how to evaluate system-change interventions and interventions implemented in complex systems? Join us to explore ways to select and apply evaluation methods based on the dynamics of complex adaptive systems. Learn how three types of human system dynamics – organized, self-organized, and unorganized – can be used individually or in combination to design and implement evaluations of complex system interventions.
Following an introductory lecture on a systems perspective in evaluation, we’ll work in small groups to examine a real-world case study that illustrates how we can apply the concepts discussed. Following the case study, you will have the opportunity to assess and gather feedback on how to apply multiple system dynamics to your own situations.
You will learn:
Session 31: Evaluating Complex System Interventions
Prerequisites: Knowledge of, or experience in, conducting or planning evaluations in complex social systems. Knowledge of multiple theories of evaluation.
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Intermediate
32. Using Evaluation in Government: The Politics of Evaluation
Since 1993, federal and state governments have increasingly used evaluation to strengthen accountability and support policy and budget decision-making. This has resulted in federal-level evaluation-related policies and practices that can affect governments at the state and local levels. In addition to these policies, the use of evaluation to improve government policies and the quality, efficiency, and value of government programs and services can be supported or inhibited by political considerations.
Through a series of mini-lectures and group discussion, you will examine policies and practices related to the planning, design, and use of evaluation in government as well as the political-career interface and politics of evaluation.
You will learn:
Joe Wholey is widely recognized for his evaluation work within government. Throughout his many roles he has advised the Office of Management and Budget, executive branch agencies, and the United States Government Accountability Office on the Government Performance and Results Act of 1993 and Program Assessment Rating Tool. He supplements this knowledge with his own evaluation experience as a local official and board member of several organizations.
Session 32: Politics of Evaluation
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
33. Advanced Topics in Concept Mapping for Evaluation
Concept mapping, as a mixed method approach, is a well-known and widely used tool to enhance evaluation design, implementation, and measurement. It allows us to systematically synthesize large statement sets and represent ideas in a series of easy-to-read graphics. Building on the introductory concept mapping training at Evaluation 2008, this intermediate-level training focuses on developing advanced skills for analysis, production of results, and utilization of results in a planning and evaluation framework.
Through the use of mini-lectures and small group exercises, you will work through the process of concept mapping including synthesizing a large statement set, choosing the best fitting cluster solution, and producing results that increase the likelihood of utilization.
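One of the analytic steps behind choosing a cluster solution is clustering how participants sorted the brainstormed statements. Purely as an illustration of that step (this is not Concept Systems' software, and the sort data are invented), here is a small sketch using scipy's hierarchical clustering.

```python
# Illustrative sketch (invented sort data): hierarchical clustering of a small
# statement set, one analytic step behind concept mapping's cluster solutions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical dissimilarity among 6 brainstormed statements: the proportion of
# participants who sorted each pair into different piles (0 = always together).
dissimilarity = np.array([
    [0.0, 0.2, 0.7, 0.8, 0.9, 0.6],
    [0.2, 0.0, 0.6, 0.9, 0.8, 0.7],
    [0.7, 0.6, 0.0, 0.3, 0.4, 0.8],
    [0.8, 0.9, 0.3, 0.0, 0.2, 0.9],
    [0.9, 0.8, 0.4, 0.2, 0.0, 0.7],
    [0.6, 0.7, 0.8, 0.9, 0.7, 0.0],
])

# Agglomerative clustering on the condensed distance matrix
Z = linkage(squareform(dissimilarity), method="average")

# Compare candidate cluster solutions (e.g., 2 vs. 3 clusters) before choosing one
for k in (2, 3):
    print(k, "clusters:", fcluster(Z, t=k, criterion="maxclust"))
```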
You will learn:
Mary Kane will lead a team of facilitators from Concept Systems, Inc, a consulting company that uses the concept mapping methodology as a primary tool in its planning and evaluation consulting projects. The presenters have extensive experience with concept mapping and are among the world's leading experts on this approach.
Session 33: Concept Mapping
Prerequisites: Completion of Concept Mapping training or previous experience using Concept Mapping methods and tools
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Intermediate
34. Using Stories in Evaluation (new expanded format!)
Stories are an effective means of communicating the ways in which individuals are influenced by educational, health, and human service agencies and programs. Unfortunately, the story has been undervalued and largely ignored as a research and reporting procedure. Stories are sometimes regarded with suspicion because of the haphazard manner in which they are captured or the cavalier description of what the story depicts. These stories, or short descriptive accounts, have considerable potential for providing insight, increasing understanding of how a program operates, or communicating evaluation results.
Through short lecture, discussion, demonstration, and hands-on activities, this workshop explores effective strategies for discovering, collecting, analyzing and reporting stories that illustrate program processes, benefits, strengths or weaknesses. Expanded from a half-day workshop based on participant feedback, you will leave prepared to integrate stories into your evaluation tool kit.
You will learn:
Richard Krueger is a senior fellow at the University of Minnesota, a past-president of AEA, and an active listener for evaluation stories for over a decade. He has offered well-received professional development workshops at AEA and for non-profit and government audiences for over 15 years and taught graduate level courses on using stories for research and evaluation.
Session 34: Using Stories
Prerequisites: Familiarity with basic qualitative research skills including interviewing and analysis of qualitative data
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Intermediate
35. Reconceptualizing Evaluation
Evaluation as a practical process has moved along its own path for millennia. The fights over its status in academe (but not in medicine or engineering, the essentially practical subjects) or over its proper operation within the community of evaluators have not affected its development into improved performance, e.g., in the Government Accountability Office, Consumers Union, or quality control in manufacturing.
Through large and small group discussion, we will examine a more external perspective on evaluation by reviewing a series of models of it as an entity in logical space, in the taxonomy of knowledge, in neural economics and AI/robotic design, and in biological and social evolution. These models will be used to argue for a view of evaluation as the alpha discipline, since all other disciplines use it to run their own shops, and the way they do so needs improvements that evaluators regard as standard requirements in their own field.
You will learn:
Michael Scriven is a renowned scholar of evaluation theory and a recipient of the American Evaluation Association's Lazarsfeld Award for his contributions to the field. He was the first president of the Evaluation Network and the 1999 president of the American Evaluation Association. Scriven is a noted author with over 100 publications in the field.
Session 35: Reconceptualizing Evaluation
Prerequisites: An interest in evaluation; particularly suited to working professionals in the field
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Intermediate
36. Ethical and Inclusive Excellence Imperatives in a Globalizing World: An Integral Evaluator Model
Evaluative judgments are inextricably bound up with culture and context. We will use an Integral Evaluator Quadrant model as a holistic self-assessment framework for ethical praxis and inclusive excellence. It will enhance multilateral self-awareness, in the context of other aspects of the evaluator role, through explicitly representing the intersection of two dimensions: (individual vs. collective vantage points) X (interior vs. exterior environments). Walking around the quadrants will facilitate our systematic exploration of lenses, filters and frames vis-a-vis judgment-making within relevant situational, relational, temporal and spatial/geographic contexts.
Together, we will cultivate a deliberative forum for exploring issues using micro-level assessment processes that will help attendees to more mindfully explore the uses of self as knower, inquirer and engager of others within as well as across salient diversity divides. This work will be framed as a life-long personal homework journey that will help us honor the sacred trust embodied in the privileged authority role of evaluator as impactful judgment-maker. We'll employ mini-lectures interspersed with private brainstorming and dyad/small-group/full-community discussions.
You will learn:
Hazel Symonette brings over 30 years of work in diversity-related arenas and currently serves as a senior policy/planning analyst at the University of Wisconsin-Madison. She designed, and has offered annually, the Institute on Program Assessment for over 10 years. Her passion lies in expanding the cadre of practitioners who embrace end-to-end evaluative thinking/praxis within their program design and development efforts.
Session 36: Excellence Imperatives
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
37. Uncovering Context
Let’s explore ways in which context shapes evaluation, and the challenges and strategies that can be used to identify, clarify, and assimilate contextual factors to achieve greater value and meaning in evaluation work. You will leave the workshop with a broader set of strategies for 'uncovering' context to make your evaluations more accurate, useful, and exciting. Participants will also deepen their appreciation for the technology of participation and how it can help enhance evaluation processes.
In this highly participatory workshop, you will apply tools and processes that clarify the context of evaluation. You will be offered an overview and bibliography on each of three topics: (1) systems thinking, (2) organizational development, and (3) appreciative inquiry; a practice activity for each tool presented; and several real-life case examples of how this tool was applied and what was gained in each evaluation from that application.
You will learn:
Tessie Catsambas and Mary Guttman are experienced trainers in the field of evaluation. Catsambas specializes in evaluation, training, and knowledge management and is co-author of Reframing Evaluation Through Appreciative Inquiry (Sage, 2006). Guttman is an evaluation specialist with a major focus on evaluation capacity and participatory processes.
Session 37: Uncovering Context
Scheduled: Wednesday, November 11, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
Half Day Workshops, Wednesday, November 11, 8 AM to 11 AM
38. Impact and Value: Telling Your Program's Success Story
Success stories are relevant to the practice of evaluation and are increasingly used to communicate with stakeholders about a program's achievements. They are an effective way for prevention programs to highlight program progress as these programs are often unable to demonstrate outcomes for several years. Therefore, communicating success during program development and implementation is important for building program momentum and sustainability.
Using a workbook developed by the Centers for Disease Control and Prevention's Division of Oral Health entitled Impact and Value: Telling Your Program's Story, this session will focus on using success stories throughout the program's life cycle. This practical and hands-on workshop will define success stories, discuss types of success stories, and describe methods for systematically collecting and using success stories to promote your program and influence policy decisions. Attendees will create a 10-second sound-bite story and begin to draft a success story.
You will learn:
Ann Webb Price and Rene Lavinghouze are co-authors of Impact and Value: Telling Your Program's Success Story, a workbook written for the CDC's Division of Oral Health (DOH). Price is the lead for the Division of Oral Health's Success Story data collection project for its 13 oral health grantees. Lavinghouze is the lead evaluator for DOH and supervises this and all other division evaluation projects.
Session 38: Success Stories
Scheduled: Wednesday, November 11, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites
39. Introduction to GIS
Geographic Information Systems (GIS) and spatial analysis concepts can be used to enhance program and policy evaluation. This workshop will introduce the basic steps of undertaking a mapping project, plus the challenges of doing computer mapping cost effectively. We will review a variety of mapping software (some free or low cost) and ways to gather data for map content from the World Wide Web, from client files, and during evaluations.
Using case studies from projects or programs in several content areas (e.g., ecology, community-based programs, public health, housing, and criminal justice), we will practice spatial analysis and use GIS approaches to help create and test logic models and evaluate programs. We will also discuss ways to involve community stakeholders and improve the visual quality of map presentations for program and policy decision-making.
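To give a concrete taste of a basic mapping step in code (the workshop itself surveys a range of mapping software), here is a hypothetical sketch using the open-source geopandas library; the site names, coordinates, and measures are invented.

```python
# Hypothetical sketch: placing program sites on a map layer with geopandas.
import geopandas as gpd
from shapely.geometry import Point

sites = gpd.GeoDataFrame(
    {
        "site": ["Clinic A", "Clinic B", "Clinic C"],      # invented sites
        "clients_served": [320, 110, 540],                 # invented measure
        "geometry": [Point(-84.39, 33.75), Point(-84.35, 33.80), Point(-84.42, 33.70)],
    },
    crs="EPSG:4326",  # plain latitude/longitude coordinates
)

# Color points by a program measure and save a simple map image
ax = sites.plot(column="clients_served", legend=True, markersize=80)
ax.set_title("Hypothetical program sites by clients served")
ax.figure.savefig("sites_map.png", dpi=150)
```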
You will learn:
Stephen Maack and Arlene Hopkins bring university-level teaching experience, workshop facilitation experience, and both a class-based and practical background in using GIS for evaluation, to their workshop presentation.
Session 39: Introduction to GIS
Scheduled: Wednesday, November 11, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites
40. An Executive Summary is Not Enough: Effective Reporting Techniques for Evaluators
As an evaluator you are conscientious about conducting the best evaluation possible, but how much thought do you give to communicating your results effectively? Do you consider your job complete after submitting a lengthy final report? Reporting is an important skill for evaluators who care about seeing their results disseminated widely and recommendations actually implemented.
Drawing on current research, this interactive workshop will present an overview of three key principles of effective reporting and engage participants in a discussion of the role reporting plays in effective evaluation. Participants will leave with an expanded repertoire of innovative reporting techniques and will have the opportunity to work on a real example in groups.
You will learn:
Kylie Hutchinson has served since 2005 as the trainer for the Canadian Evaluation Society's Essential Skills Series (ESS) in British Columbia. Her interest in dissemination and communications stems from twenty years of experience in the field of evaluation.
Session 40: Effective Reporting
Scheduled: Wednesday, November 11, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites
41. Using Systems Tools to Evaluate Multi-Site Programs
Coordinating evaluation efforts of large multi-site programs requires specific skills from evaluators. One of these skills is the ability to connect multi-site evaluations with overall program objectives. This skill can be acquired by learning to use diagramming tools that show function, feedback loops, force fields, and leverage points for priority decisions.
Through individual and small group work, we will examine a multi-site evaluation system and gain hands-on experience applying diagramming tools. You will draw a program system and consider its value to program goals and objectives. After reviewing and summarizing these systems, we will discuss questions intended to exemplify the sciences of intentional learning, behavioral change, systems thinking and practice, and assessment as functional systems of evaluation and accountability.
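As a purely illustrative aside (the workshop itself centers on hands-on diagramming exercises, not any particular software), a feedback loop connecting sites to program-level decisions can also be sketched with the open-source networkx and matplotlib libraries; the node labels below are invented:

```python
# Hypothetical sketch: a simple feedback-loop diagram for a multi-site program.
# Node labels are invented for illustration.
import networkx as nx
import matplotlib.pyplot as plt

loop = nx.DiGraph()
loop.add_edges_from([
    ("Site-level data collection", "Cross-site synthesis"),
    ("Cross-site synthesis", "Program-level decisions"),
    ("Program-level decisions", "Resource allocation to sites"),
    ("Resource allocation to sites", "Site-level data collection"),  # closes the feedback loop
])

pos = nx.circular_layout(loop)
nx.draw_networkx(loop, pos, node_color="lightblue", node_size=2500,
                 font_size=8, arrows=True)
plt.axis("off")
plt.tight_layout()
plt.savefig("multisite_feedback_loop.png", dpi=200)
```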
You will learn:
Molly Engle has taught, participated in, and designed numerous multi-site evaluations from a systems context over the past 10 years and is a past AEA President. Andrea Hegedus applies system-level evaluation to racial and ethnic health disparities in her work at the Centers for Disease Control and Prevention.
Session 41: Systems Tools for Multi-site
Scheduled: Wednesday, November 11, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites
42. Dominators, Cynics, and Wallflowers: Practical Strategies for Moderating Meaningful Focus Groups
Focus groups are a great way to gather input from stakeholders in an evaluation. Unfortunately, the behavior of focus group participants can be a challenge. Some participants are dominators who need to be curbed so they do not contaminate the discussion; others are cynics or wallflowers who also impede the quality of the discussion.
Through small group discussion, role play, and lecture, you will learn to identify ten problem behaviors that commonly occur in focus groups and practical strategies to prevent, manage, and leverage these behaviors. You will also be able to share your own experiences with focus groups and gather helpful tactics that others have used.
You will learn:
Robert Kahle is the author of Dominators, Cynics, and Wallflowers: Practical Strategies for Moderating Meaningful Focus Groups (Paramount Market Publishing, 2007). Kahle has been working on the issue of managing difficult behavior in focus groups and other small group settings since the mid 1990s.
Session 42: Moderating Focus Groups
Prerequisites: Understanding of qualitative methods and small group dynamics. Experience moderating focus groups.
Scheduled: Wednesday, November 11, 8:00 AM to 11:00 AM
Level: Intermediate
Half Day Workshops, Wednesday, November 11, 12 PM to 3 PM
43. Applied Cost-Effectiveness and Cost-Benefit Analysis
As decision-makers are charged with doing more for clients and taxpayers with dwindling private and public resources, evaluators face an increasing need to measure not just effectiveness but also cost-effectiveness. Because cost-effectiveness analysis (CEA) must start from the determination of effectiveness, an efficient approach is for evaluators to add measures of costs to their planned studies, thus allowing CEA (and if effects are monetizable, cost-benefit analysis or CBA) to be performed.
Lecture interspersed with hands-on exercises will provide a strong conceptual foundation for the proper application of CEA and CBA and tools for cost and benefit assessment. You will gain a better understanding of core concepts, perspective of the analysis and its implications, and the identification and measurement of appropriate costs, effectiveness, and benefits so that the cost-effectiveness and cost-benefit of alternative programs can be compared and optimized.
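As a hedged illustration of the core arithmetic (the figures below are invented for this example and are not drawn from the workshop): when comparing two programs A and B, the incremental cost-effectiveness ratio divides the difference in costs by the difference in effects, and, where effects can be monetized, net benefit subtracts costs from monetized benefits:

$$\mathrm{ICER} = \frac{C_A - C_B}{E_A - E_B}, \qquad \text{Net benefit} = B - C$$

For instance, if program A costs \$150,000 and program B costs \$100,000, and A produces 50 more successful completions than B, the ICER is (150,000 - 100,000) / 50 = \$1,000 per additional completion.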
You will learn:
Patricia Herman and Brian Yates both have years of experience teaching cost-effectiveness and cost-benefit analysis and have published numerous resources on the topics. They presented a shorter version of this workshop at the 2008 AEA conference and are expanding it to workshop-length in order to provide a more in-depth exploration.
Session 43: Cost-Effectiveness
Scheduled: Wednesday, November 11, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites
44. The Growing Field of Energy Program Evaluation and Where the Opportunities Lie
The field of energy program evaluation has a 30-year history and is growing rapidly as a result of the need to develop policies and programs to respond to climate change. Evaluators new to this field need to understand the terms and issues that frame evaluation of energy programs if they are to be successful.
Through lecture and discussion, we will review the history of energy program evaluation and key issues specific to this type of evaluation. We will also discuss key entry points for engagement in the field of energy program evaluation.
You will learn:
Session 44: Energy Program Evaluation
Scheduled: Wednesday, November 11, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites
45. Advanced Applications of Program Theory
While simple logic models are an adequate way to gain clarity and initial understanding about a program, sound program theory can enhance understanding of the underlying logic of the program by providing a disciplined way to state and test assumptions about how program activities are expected to lead to program outcomes.
Lecture, exercises, discussion, and peer critique will help you to develop and use program theory as a basis for decisions about measurement and evaluation methods, to disentangle the success or failure of a program from the validity of its conceptual model, and to facilitate the participation and engagement of diverse stakeholder groups.
You will learn:
Stewart Donaldson is Dean of the School of Behavioral and Organizational Sciences at Claremont Graduate University. He has published widely on the topic of applying program theory, developed one of the largest university-based evaluation training programs, and has conducted theory-driven evaluations for more than 100 organizations during the past decade.
Session 45: Advanced Program Theory
Prerequisites: Experience or Training in Logic Models
Scheduled: Wednesday, November 11, 12:00 PM to 3:00 PM
Level: Intermediate
46. Empowerment Evaluation
Empowerment Evaluation builds program capacity and fosters program improvement. It teaches people to help themselves by learning how to evaluate their own programs. The basic steps of empowerment evaluation include: 1) establishing a mission or unifying purpose for a group or program; 2) taking stock - creating a baseline to measure future growth and improvement; and 3) planning for the future - establishing goals and strategies to achieve goals, as well as credible evidence to monitor change. The role of the evaluator is that of coach or facilitator in an empowerment evaluation, since the group is in charge of the evaluation itself.
Employing lecture, activities, demonstration, and case examples ranging from townships in South Africa to a $15 million Hewlett-Packard Digital Village project, the workshop will introduce you to the steps of empowerment evaluation and tools to facilitate the approach. You will join participants in conducting an assessment, using empowerment evaluation steps and techniques.
You will learn:
David Fetterman hails from Stanford University and is the editor of (and a contributor to) the recently published Empowerment Evaluation Principles in Practice (Guilford). He chairs the Collaborative, Participatory and Empowerment Evaluation AEA Topical Interest Group and is a highly experienced and sought-after facilitator.
Session 46: Empowerment Evaluation
Scheduled: Wednesday, November 11, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites
47. Nonparametric Statistics: What to Do When Your Data is Skewed or Your Sample Size is Small
So many of us have encountered situations where we simply did not end up with the robust, bell-shaped data set we thought we would have to analyze. In these cases, traditional parametric methods lose their power and are no longer appropriate. This workshop provides a brief overview of parametric statistics in order to contrast them with non-parametric statistics. Different data situations which require non-parametric statistics will be reviewed, and appropriate techniques will be demonstrated step by step.
This workshop will combine a classroom style with group work. The instructor will use a laptop to demonstrate how to run the non-parametric statistics in SPSS. You are encouraged to e-mail the facilitator prior to the conference with your specific data questions, which may then be chosen for problem-solving in the workshop.
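As a purely illustrative sketch (the workshop itself demonstrates these procedures in SPSS, and the data below are invented), two common nonparametric tests can also be run in a few lines of open-source Python with scipy:

```python
# Illustrative sketch only; the workshop demonstrates these tests in SPSS.
# Data are invented: small, skewed samples of an outcome such as visits per client.
from scipy import stats

program_group = [2, 3, 3, 4, 5, 7, 19]        # small n, right-skewed
comparison_group = [1, 1, 2, 2, 3, 4, 4, 6]

# Mann-Whitney U: a rank-based alternative to the independent-samples t-test.
u_stat, u_p = stats.mannwhitneyu(program_group, comparison_group, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.3f}")

# Wilcoxon signed-rank: a nonparametric alternative to the paired t-test.
pre = [10, 12, 9, 14, 11, 13]
post = [12, 15, 10, 18, 12, 16]
w_stat, w_p = stats.wilcoxon(pre, post)
print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.3f}")
```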
You will learn:
Jennifer Camacho Catrambone uses statistics as part of her work at the Ruth M Rothstein CORE Center. She regularly teaches informal courses on the use of non-parametric statistics in the evaluation of small programs and enjoys doing independent evaluative and statistical consulting.
Session 47: Nonparametric Statistics
Scheduled: Wednesday, November 11, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites
Half Day Workshops, Sunday, November 15, 9 AM to 12 PM
48. Creating Surveys to Measure Performance and Assess Needs
Surveys for program evaluation, performance measurement, or needs assessment can provide excellent information for evaluators. However, developing effective surveys requires attention both to unbiased question design and to how the results of the survey will be used. Neglecting either aspect undermines the success of the survey.
This hands-on workshop will use lecture and group exercises to review guidelines for survey development. We will use two national surveys, one measuring the performance of local governments and the other assessing the needs of older adults, to inform the creation of our own survey instruments.
You will learn:
Michelle Kobayashi is co-author of Citizen Surveys: A Comprehensive Guide to Making Them Matter (International City/County Management Association, 2009). She has over 25 years of experience in performance measurement and needs assessment and has conducted scores of workshops on research and evaluation methods for community-based organizations, local government employees, elected officials, and students.
Session 48: Creating Surveys
Scheduled: Sunday, November 15, 9:00 AM to 12:00 PM
Level: Beginner, no prerequisites
49. Conflict Resolution Skills for Evaluators
Unacknowledged and unresolved conflict can challenge even the most skilled evaluators. Conflict between evaluators and clients and among stakeholders creates barriers to successful completion of the evaluation project. This workshop will delve into ways to improve listening, problem solving, communication, and facilitation skills and introduce a streamlined process of conflict resolution that may be used with clients and stakeholders.
Through a hands-on, experiential approach using real-life examples from program evaluation, you will become skilled at the practical application of conflict resolution in evaluation settings. You will have the opportunity to assess your own approach to handling conflict and to build on that assessment to improve your conflict resolution skills.
You will learn:
Jeanne Zimmer has served as Executive Director of the Dispute Resolution Center since 2001 and is completing a doctorate in evaluation studies with a minor in conflict management at the University of Minnesota. For over a decade, she has been a very well-received professional trainer in conflict resolution and communications skills.
Session 49: Conflict Resolution
Scheduled: Sunday, November 15, 9:00 AM to 12:00 PM
Level: Beginner, no prerequisites
50. Advanced Focus Group Moderator Training
The literature is rich in textbooks and case studies on many aspects of focus groups, including design, implementation, and analysis. Missing, however, are guidelines and discussions on how to moderate a focus group.
In this experiential learning environment, you will find out how to maximize time, build rapport, create energy, and apply communication tools in a focus group to maintain the flow of discussion among the participants and elicit answers from more than one person.
Using practical exercises and examples, including role play and constructive peer-critique as a focus group leader or respondent, you will explore effective focus group moderation, including ways to increase and limit responses among individuals and the group as a whole. In addition, many of the strategies presented in the workshop are applicable more broadly, in other evaluation settings such as community forums and committee meetings, to stimulate and sustain discussion.
You will learn:
Nancy-Ellen Kiernan has facilitated over 200 workshops on evaluation methodology and moderated focus groups in over fifty studies, with groups ranging from Amish dairy farmers in barns to at-risk teens in youth centers to university faculty. On the faculty at Penn State University, she has published widely and is a regular workshop presenter at AEA's annual conference.
Session 50: Focus Group Moderator Training
Prerequisites: Having moderated 2 focus groups and written focus group questions and probes
Scheduled: Sunday, November 15, 9:00 AM to 12:00 PM
Level: Intermediate
51. Systems, Systems Thinking, and Systemness: What's It All About, Anyway?
As interest in systems and the evaluation of systems grows, evaluators often find themselves struggling to understand what is really meant by terms such as 'systems theory' or 'systems thinking'. And what about all those other 'system' terms evaluators encounter, such as 'systemness', 'systems of care', and 'systems of service delivery'? What are the distinctions among these terms, and how does understanding them affect us as evaluators? What are the strengths and limitations of each of these perspectives? What does this mean in the overall context of systems in evaluation?
In this workshop, we will consider commonly accepted definitions of these paradigms and be challenged to explore what these different perspectives bring to understanding issues pertaining to systems, evaluation, and context. Through mini-lectures, group activities, and discussions, we'll pursue the ultimate objective of challenging our thinking about systems and encouraging 'stretching' our evaluative thinking concerning systems, systems thinking, and systemness.
You will learn:
Jan Noga is the owner of Pathfinder Evaluation and Consulting, which provides consulting, evaluation, and assessment services in education and the human services. Meg Hargreaves is a senior health researcher at Mathematica Policy Research, where she works on evaluations of complex systems change initiatives. Both are experienced facilitators.
Session 51: Systems Thinking
Scheduled: Sunday, November 15, 9:00 AM to 12:00 PM
Level: Beginner, no prerequisites
52. Purposeful Program Theory
While program theory has become increasingly popular over the past 10 to 20 years, guides for developing and using logic models sometimes sacrifice contextual differences in practice in the interest of clear guidance and consistency across organizations. This session is designed for advanced evaluators seeking to develop and use program theory in ways that suit the particular characteristics of the intervention, the evaluation purpose, and the organizational environment.
In addition to challenges identified by participants, the workshop will use mini-lectures, exercises, and discussions to address three particularly important issues: improving the quality of the models by drawing on generic theories of change and program archetypes; balancing the tension between simple models, which communicate clearly, and complicated models, which better represent reality; and using the model to develop and answer evaluation questions that go beyond simply meeting targets.
You will learn:
Patricia Rogers is an experienced facilitator and evaluator and one of the leading authors in the area of program theory. She has taught for The Evaluators Institute and is on the faculty at the Royal Melbourne Institute of Technology.
Session 52: Purposeful Program Theory
Prerequisites: Knowledge of and experience developing and using logic models and program theory for monitoring and evaluation
Scheduled: Sunday, November 15, 9:00 AM to 12:00 PM
Level: Advanced