Professional Development Workshops are hands-on, interactive sessions that provide an opportunity to learn new skills or hone existing ones at Evaluation 2008.
Professional development workshops precede and follow the conference. They differ from sessions offered during the conference itself in at least three ways: (1) each is longer (3, 6, or 12 hours) and thus provides a more in-depth exploration of a skill or area of knowledge; (2) presenters are paid for their time and are expected to have significant experience both presenting and in the subject area; and (3) attendees pay separately for these workshops and are given the opportunity to evaluate the experience. Sessions are filled on a first-come, first-served basis and most are likely to fill before the conference begins.
Registration for professional development workshops is handled on the conference registration form; however, you may register for professional development workshops even if you are not attending the conference itself (still using the regular conference registration form - just uncheck the conference registration box).
Workshop registration fees are in addition to the fees for conference registration:
| | Two Day Workshop | One Day Workshop | Half Day Workshop |
| --- | --- | --- | --- |
| AEA Members | $300 | $150 | $75 |
| Full-time Students | $160 | $80 | $40 |
| Non-Members | $400 | $200 | $100 |
Sessions that are closed because they have reached their maximum attendance will be clearly marked below the session name. No further registrations will be accepted for full sessions and we do not maintain waiting lists. Once sessions are closed, they will not be re-opened.
(1) Qualitative Methods; (2) Quantitative Methods; (3) Evaluation 101; (4) Logic Models; (5) Participatory Evaluation; (7) Survey Design; (8) Building Evaluation Capacity; (9) Communicating and Reporting
(10) Racism in Evaluation; (11) Evaluating Capacity Development; (12) Consulting Contracts; (13) Multiple Regression; (14) Collaborative Evaluations; (15) Immigrant Communities; (16) Longitudinal Analysis; (17) Effects of Interventions; (18) Organizational Collaboration; (19) Theory-Driven Evaluation; (20) Utilization Focused; (21) RealWorld Evaluation; (22) Evaluation Dissertation; (23) Experimental Design; (24) Transformative Mixed Methods Evaluations; (25) Human Systems Dynamics Theory; (26) Needs Assessment; (27) Mixed Methods; (28) Cultivating Self; (29) Marketing Your Evaluation Business; (30) Environmental and Resource Programs; (31) Building an Evaluation System for Your Organization; (32) Concept Mapping; (33) Logic Models - Beyond the Traditional View; (34) Interviewing & Content Analysis; (35) Effect Size Measures; (36) Grantwriting
Half Day Workshops, Wednesday, November 5, 12 PM to 3 PM
(45) Capacity Building; (46) Empowerment Evaluation; (47) Knowledge Transfer; (48) Building Evaluation Capacity Through Appreciative Inquiry; (49) Handling Data; (50) Using Stories; (51) Qualitative Analysis Software; (52) Adv Program Theory
Qualitative data can humanize evaluations by portraying people and stories behind the numbers. Qualitative inquiry involves using in-depth interviews, focus groups, observational methods, and case studies to provide rich descriptions of processes, people, and programs. When combined with participatory and collaborative approaches, qualitative methods are especially appropriate for capacity-building-oriented evaluations.
Through lecture, discussion, and small-group practice, this workshop will help you to choose among qualitative methods and implement those methods in ways that are credible, useful, and rigorous. It will culminate with a discussion of new directions in qualitative evaluation.
Michael Quinn Patton is an independent consultant and professor at the Union Institute. An internationally known expert on utilization-focused evaluation and qualitative methods, he published the third edition of Qualitative Research and Evaluation Methods (SAGE) in 2001.
Quantitative data offers opportunities for numerical descriptions of populations and samples. The challenge is in knowing which analyses are best for a given situation. Designed for the practitioner needing a refresher course and/or guidance in applying quantitative methods to evaluation contexts, the workshop covers the basics of parametric and nonparametric statistics, as well as how to report your findings.
Hands-on exercises and computer demonstrations interspersed with mini-lectures will introduce methods and concepts. The instructor will review examples of research and evaluation questions and the statistical methods appropriate to developing a quantitative data-based response.
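For readers who want a concrete sense of the parametric-versus-nonparametric choice this workshop addresses, here is a minimal sketch (not drawn from the workshop materials) comparing two invented groups with SciPy; the group names, sample sizes, and data are illustrative assumptions only.

```python
# Illustrative sketch only (not workshop material): comparing two invented groups
# with a parametric t-test versus a nonparametric Mann-Whitney U test via SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
program_group = rng.normal(loc=75, scale=10, size=40)     # hypothetical outcome scores
comparison_group = rng.normal(loc=70, scale=10, size=40)  # hypothetical comparison scores

# Parametric choice: independent-samples t-test (assumes roughly normal data).
t_stat, t_p = stats.ttest_ind(program_group, comparison_group)

# Nonparametric alternative: Mann-Whitney U test (no normality assumption).
u_stat, u_p = stats.mannwhitneyu(program_group, comparison_group, alternative="two-sided")

print(f"t-test:       t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.3f}")
```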
Katherine McKnight applies quantitative analysis as Director of Program Evaluation for Pearson Achievement Solutions and is co-author of Missing Data: A Gentle Introduction (Guilford, 2007). Additionally, she teaches Research Methods, Statistics, and Measurement in Public and International Affairs at George Mason University in Fairfax, Virginia.
3. Evaluation 101: Intro to Evaluation Practice
Begin at the beginning and learn the basics of evaluation from an expert trainer. The session will focus on the logic of evaluation to answer the key question: "What resources are transformed into what program evaluation strategies to produce what outputs for which evaluation audiences, to serve what purposes?" Enhance your skills in planning, conducting, monitoring, and modifying the evaluation so that it generates the information needed to improve program results and communicate program performance to key stakeholder groups.
A case-driven instructional process, using discussion, exercises, and lecture will introduce the steps in conducting useful evaluations: Getting started, Describing the program, Identifying evaluation questions, Collecting data, Analyzing and reporting, and Using results.
John McLaughlin has been part of the evaluation community for over 30 years working in the public, private, and non-profit sectors. He has presented this workshop in multiple venues and will tailor this two-day format for Evaluation 2008.
4. Logic Models for Program Evaluation and Planning
Many programs fail to start with a clear description of the program and its intended outcomes, undermining both program planning and evaluation efforts. The logic model, as a map of what a program is and intends to do, is a useful tool for clarifying objectives, improving the relationship between activities and those objectives, and developing and integrating evaluation plans and strategic plans.
First, we will recapture the utility of program logic modeling as a simple discipline, using cases in public health and human services to explore the steps for constructing, refining and validating models. Then, we'll examine how to improve logic models using some fundamental principles of "program theory", demonstrate how to use logic models effectively to help frame questions in program evaluation, and show some ways logic models can also inform strategic planning. Both days use modules with presentations, small group case studies, and debriefs to reinforce group work.
You will learn how to use logic models to inform strategic planning.
Thomas Chapel is the central resource person for planning and program evaluation at the Centers for Disease Control and Prevention and a sought-after trainer. Tom has taught this workshop for the past four years to much acclaim.
Participatory evaluation practice requires evaluators to be skilled facilitators of interpersonal interactions. This workshop will provide you with theoretical grounding (social interdependence theory, conflict theory, and evaluation use theory) and practical frameworks for analyzing and extending your own practice.
Through presentations, discussion, reflection, and case study, you will experience strategies to enhance participatory evaluation and foster interaction. You are encouraged to bring examples of challenges faced in your own practice for discussion to this workshop, which is consistently lauded for its ready applicability to real-world evaluation contexts.
Jean King has over 30 years of experience as an award-winning teacher at the University of Minnesota. As an evaluation practitioner, she has received AEA’s Myrdal award for outstanding evaluation practice. Laurie Stevahn is a professor at Seattle University with extensive facilitation experience as well as applied experience in participatory evaluation.
7. Survey Design and Administration
A standout from the 2006 program, this workshop has been updated and expanded to a two-day offering. Designed for true beginners with little or no background in survey development, it introduces the fundamentals of survey design and administration, and you will leave with tools for developing and improving your own surveys as part of your evaluation practice.
This interactive workshop will use a combination of direct instruction with hands-on opportunities for participants to apply what is learned to their own evaluation projects. We will explore different types of surveys, how to identify the domains included in surveys, how to choose the right one, how to administer the survey and how to increase response rates and quality of data. You will receive handouts with sample surveys, item writing tips, checklists, and resource lists for further information.
Courtney Malloy and Harold Urman are consultants at Vital Research, a research and evaluation firm that specializes in survey design. They both have extensive experience facilitating workshops and training sessions on research and evaluation for diverse audiences.
8. Building Evaluation Capacity Within Community Organizations
New extended format! Are you working with community groups (coalitions, nonprofits, social service agencies, local health departments, volunteers, school boards) that are trying to evaluate the outcomes of their work to meet a funding requirement, an organizational expectation, or to enhance their own program performance?
Join us in this highly interactive workshop where you will practice and reflect on a variety of activities and adult learning techniques associated with three components of evaluation planning: focus, data collection, and communicating. Try these activities out, assess their appropriateness for your own situation, and expand your toolbox. We will draw from a compendium of practical tools and strategies that we have developed over the past years and have found useful in our own work. We encourage you to bring your own 'best practices' to share as we work towards building the evaluation capacity of communities.
Ellen Taylor-Powell is widely recognized for her work in evaluation capacity building. Her 20 years in Extension have continuously focused on evaluation training and capacity building with concentration on individual, team, and organizational learning. She will lead a team of four facilitators with extensive experience both in teaching adults and in working with community groups and agencies.
9. Evaluation Strategies for Communicating and Reporting
New extended format! Communicating evaluation processes and results is one of the most critical aspects of evaluation practice. Yet, evaluators continually experience frustration with hours spent on writing reports that are seldom read or shared. While final reports will continue to be an expectation of many evaluation contracts, there are other ways in which evaluators can communicate and report on the progress and findings from an evaluation.
Using hands-on demonstrations and real-world examples, we will explore how a variety of strategies for communicating and reporting can increase learning from the evaluation’s findings, stakeholders’ understanding of evaluation processes, the evaluation’s credibility, and action on the evaluation’s recommendations.
Rosalie T Torres is president of Torres Consulting Group, a research, evaluation and management consulting firm specializing in the feedback-based development of programs and organizations since 1992. She has authored/co-authored numerous books and articles including, Evaluation Strategies for Communicating and Reporting (Torres, Preskill, & Piontek, SAGE, 2005), and Evaluative Inquiry for Learning in Organizations (Preskill & Torres, SAGE, 1999).
10. Identifying, Measuring and Interpreting Racism in Evaluation Efforts
Historically, racism has been a contributing factor to the racial disparities that persist across contemporary society. This workshop will help you to identify, frame, and measure racism's presence. The workshop includes strategies for removing racism from various evaluation processes, as well as ways to identify types of racism that may be influencing the contexts in which racial disparities programs and other societal programs operate.
Through mini-lectures, discussion, small group exercises, and handouts, learners will practice applying workshop content to real societal problems, such as identifying racial biases that may be embedded in research literature, identifying the influence of racism in the contexts of racial disparities programs, and eliminating inadvertent racism that may become embedded in cross-cultural research.
Pauline Brooks is an evaluator and researcher by formal training and practice. She has had years of university-level teaching and evaluation experience in both public and private education, particularly in the fields of education, psychology, social work and public health. For over 20 years, she has worked in culturally diverse settings focusing on issues pertaining to underserved populations, class, race, gender, and culture.
11. Evaluating Capacity Development
This workshop is full. There are no more spaces available for this workshop and we do not maintain waiting lists. Please select an alternative workshop in which to enroll.
New for 2008! Capacity development is a 'plastic' term that can be stretched to fit almost anything done in social development programs. This workshop provides basic concepts and frameworks that participants can use to better understand capacity development, and practical guidance for planning, designing and implementing evaluations of capacity development efforts that meet the needs of intended users.
Drawing on the facilitator's experience in international development programs, the workshop will engage you in discussions and group exercises that clarify the meaning of capacity development and identify the main types of capacity and the levels at which it can be developed and assessed. Frameworks, principles and practical tips will be presented for assessing organizational capacity and performance, and for focusing, designing and implementing evaluations of capacity development efforts.
You will learn:
Douglas Horton is an independent evaluator and management consultant specializing in international research and development programs. For over 20 years, he has facilitated workshops on topics related to the management and evaluation of development programs around the world.
12. Navigating the Waters of Evaluation Consulting Contracts
New for 2008! Are you looking for a compass, or maybe just a little wind to send you in the right direction when it comes to your consulting contracts? Have you experienced a stormy contract relationship and seek calm waters?
This workshop combines mini lecture, discussion, skills practice, and group work to address evaluation contract issues. You will learn about important contractual considerations such as deliverables, timelines, confidentiality clauses, rights to use/ownership, budget, client and evaluator responsibilities, protocol, data storage and use, pricing, contract negotiation, and more. Common mistakes and omissions, as well as ways to navigate through these will be covered. You will receive examples of the items discussed, as well as resources informing the contract process. You are encouraged to bring topics for discussion or specific questions.
Kristin Huff is a seasoned evaluator and facilitator with over 15 years of training experience and a Master of Science degree in Experiential Education. Ms. Huff has managed consulting contracts covering the fields of technology, fundraising, nonprofit management, and evaluation, and has developed and managed more than 400 consulting contracts in the past seven years.
13. Applications of Multiple Regression in Evaluation: Mediation, Moderation, and More
This workshop is canceled due to a family emergency.
14. Collaborative Evaluations: A Step-by-Step Model for the Evaluator
Do you want to engage and succeed in collaborative evaluations? Using clear and simple language, the presenter will outline key concepts and effective tools/methods to help master the mechanics of collaboration in the evaluation environment. Building on a theoretical grounding, you will explore how to apply the Model for Collaborative Evaluations (MCE) to real-life evaluations, with a special emphasis on those factors that facilitate and inhibit stakeholders' participation.
Using discussion, demonstration, hands-on exercises and small group work, each section deals with fundamental factors contributing to the six model components that must be mastered in order to succeed in collaborations. You will gain a deep understanding of how to develop collaborative relationships in the evaluation context.
Liliana Rodriguez-Campos is the Program Chair of the Collaborative, Participatory & Empowerment TIG and a faculty member in Evaluation at the University of South Florida. An experienced facilitator, she has developed and offered training in both English and Spanish to a variety of audiences in the US and internationally.
This workshop will attend to the unique issues of conducting evaluations in immigrant communities and other cultures within which evaluation is generally unknown, unfamiliar or unwelcome. We will examine such issues as non-verbal communication; entry, access, trust and relationship building; stakeholder participation; identifying culturally specific outcomes and indicators; language, translation and culturally specific constructs; taboo and sensitive topics; adaptation of qualitative and quantitative methods; instrument development and surveying; sampling; and culturally specific reporting and information sharing.
Drawing on case examples from the facilitator's extensive experience in immigrant and other cultural communities and the experience of participants, we will illustrate what has and hasn't worked well, principles of good practice, and the learning opportunities for all involved. Through lecture, discussions and exercises you will explore the challenges of cross-cultural evaluation and approaches to responding to them.
Barry Cohen and Mia Robillos of Rainbow Research have extensive experience in conducting evaluations within immigrant and other cultural communities, including working with Southeast Asian immigrants, Hmong opiate users, and Latino and Vietnamese religious leaders. They have facilitated workshops and presentations on related topics for the past five years.
This workshop is full. There are no more spaces available for this workshop and we do not maintain waiting lists. Please select an alternative workshop in which to enroll.
Many evaluation studies make use of longitudinal data. However, while much can be learned from repeated measures, the analysis of change is also associated with a number of special problems. This workshop reviews how traditional methods in the analysis of change, such as the paired t-test, and repeated measures ANOVA or MANOVA, address these problems. We will examine underlying assumptions and explore how structural equation models can help to improve our analyses. The core of the workshop will be an introduction to latent growth curve modeling (LGM) and how to specify, estimate, and interpret growth curve models. We will conclude with a discussion of recent advances in LGM.
A mixture of PowerPoint presentation, group discussion, and exercises with a special focus on model specification will help us to explore LGM in contrast to more traditional approaches to analyzing change. We will demonstrate processes for setting up and estimating models using different software packages, and a number of practical examples along with sample output will be used to illustrate the material. You will receive all slides as handouts as well as recommendations for further exploration.
You will learn:
Manuel C Voelkle is a research associate at the University of Mannheim where he teaches courses on multivariate data analysis and research design and methods. Werner W Wittmann is professor of psychology at the University of Mannheim, where he heads a research and teaching unit specializing in research methods, assessment and evaluation research.
Session 16: Longitudinal Analysis
Prerequisites: Solid understanding of structural equation models and regression analytic techniques. Experience with analyzing longitudinal data is useful but not necessary.
Scheduled: Tuesday, November 4, 9:00 AM to 4:00 PM
Level: Intermediate
17. How to Estimate the Effects of Interventions
This workshop is full. There are no more spaces available for this workshop and we do not maintain waiting lists. Please select an alternative workshop in which to enroll.
This workshop explores how to design program evaluations so as to produce the most credible estimates of treatment effects that can be obtained under the many imposing constraints of field settings. We will distinguish among eight elementary types of comparisons for estimating treatment effects: four types of randomized experiments and four prototypical quasi-experimental designs. From there, we will identify and discuss the myriad design embellishments that can be applied to each of the elementary types of comparisons, so as to tailor the design to best fit the demands of your unique evaluation setting.
Together, we will discuss the relative strengths and weaknesses of the various types of comparisons and design options, and then present and critique examples of designs. Finally, we will share theoretical advances in research design (including the principle of design parallelism and the theory of design elaboration) to highlight the cutting edge in this area.
You will learn:
Chip Reichardt is a professor of psychology at the University of Denver. He has helped design program evaluations to assess treatment effects in a variety of educational and health-related fields, and he has enjoyed the many challenges of analyzing the data produced by these designs.
Session 17: Effects of Interventions
Prerequisites: Basics of evaluation design and practical experience designing evaluations
Scheduled: Tuesday, November 4, 9:00 AM to 4:00 PM
Level: Intermediate
18. Evaluating Organizational Collaboration
“Collaboration” is a ubiquitous, yet misunderstood, under-empiricized, and un-operationalized construct. Program and organizational stakeholders looking to do and be collaborative struggle to identify, practice, and evaluate it with efficacy. This workshop will explore how the principles of collaboration theory can inform evaluation practice.
You will have the opportunity to increase your capacity to quantitatively and qualitatively examine the development of inter- and intra-organizational partnerships. Together, we will examine assessment strategies and specific tools for data collection, analysis, and reporting. We will practice using assessment techniques that are currently being used in the evaluation of PreK-16 educational reform initiatives predicated on organizational collaboration (professional learning communities), as well as other grant-sponsored endeavors, including the federally funded Safe Schools/Healthy Students initiative.
You will learn:
Rebecca Gajda has facilitated workshops and courses for adult learners for more than 10 years and is on the faculty at the University of Massachusetts - Amherst. Her most recent publication on the topic of organizational collaboration may be found in the March 2007 issue of The American Journal of Evaluation. Dr. Gajda notes, “I love creating learning opportunities in which all participants learn, find the material useful, and have fun at the same time.”
Session 18: Organizational Collaboration
Prerequisites: Basic understanding of organizational change theory/systems theory, familiarity with mixed methodological designs, group facilitation and participation skills.
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Intermediate
Learn the theory-driven approach for assessing and improving program planning, implementation and effectiveness. You will explore the conceptual framework of program theory and its structure, which facilitates precise communication between evaluators and stakeholders regarding evaluation needs and approaches to addressing those needs. From there, the workshop moves to how program theory and theory-driven evaluation are useful in the assessment and improvement of a program at each stage throughout its life-cycle.
Mini-lectures, group exercises and case studies will illustrate the use of program theory and theory-driven evaluation for program planning, initial implementation, mature implementation and outcomes. In the outcome stages, you will explore the differences among outcome monitoring, efficacy evaluation and effectiveness evaluation.
You will learn:
Huey Chen, a Senior Evaluation Scientist at the Centers for Disease Control and Prevention and 1993 recipient of the AEA Lazarsfeld Award for contributions to evaluation theory, is the author of Theory-Driven Evaluations (SAGE), the classic text for understanding program theory and theory-driven evaluation, and more recently of Practical Program Evaluation (2005). He is an internationally known workshop facilitator on the subject.
Session 19: Theory-Driven Evaluation
Prerequisites: Basic background in evaluation.
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Intermediate
20. Utilization-focused Evaluation
Evaluations should be useful, practical, accurate and ethical. Utilization-focused Evaluation is a process that meets these expectations and promotes use of evaluation from beginning to end. With a focus on carefully targeting and implementing evaluations for increased utility, this approach encourages situational responsiveness, adaptability and creativity. This training is aimed at building capacity to think strategically about evaluation and increase commitment to conducting high quality and useful evaluations.
Utilization-focused evaluation focuses on the intended users of the evaluation in the context of situational responsiveness with the goal of methodological appropriateness. An appropriate match between users and methods should result in an evaluation that is useful, practical, accurate, and ethical, the characteristics of high quality evaluations according to the profession's standards. With an overall goal of teaching you the process of Utilization-focused Evaluation, the session will combine lectures with concrete examples and interactive case analyses.
You will learn:
Michael Quinn Patton is an independent consultant and professor at the Union Institute and an internationally known expert on Utilization-focused Evaluation. This workshop is based on the newly completed fourth edition of his best-selling evaluation text, Utilization-Focused Evaluation: The New Century Text (SAGE).
Session 20: Utilization-focused
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
This workshop is full. There are no more spaces available for this workshop and we do not maintain waiting lists. Please select an alternative workshop in which to enroll.
Have you had the experience of being asked to evaluate a project that was almost finished, had no baseline, and allowed no comparison group, yet whose clients expected a 'rigorous impact evaluation'? Not only that, but as you negotiated the terms of reference, did you discover a short deadline and a rather limited budget for conducting the evaluation? Have you had to deal with political pressures, including pre-conceived expectations by stakeholders?
This workshop presents a seven-step process, a checklist and a toolbox of techniques that seek to help evaluators and clients ensure the best quality evaluation under real-life constraints like those described above. The RealWorld Evaluation approach will be introduced and its practical utility assessed through presentations and discussion, and through examples drawn from the experiences of presenters and participants. You will learn and share practical techniques for dealing with real-world constraints.
You will learn:
Jim Rugh and Michael Bamberger recently co-authored, with Linda Mabry, the book RealWorld Evaluation: Working Under Budget, Time, Data and Political Constraints (SAGE, 2006). The two presenters bring over eighty years of professional evaluation experience, mostly in developing countries around the world.
Session 21: RealWorld Evaluation
Prerequisites: Academic or practical knowledge of the basics of evaluation.
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Intermediate
22. How to Prepare an Evaluation Dissertation Proposal
Developing an acceptable dissertation proposal often seems more difficult than conducting the actual research. Further, proposing an evaluation as a dissertation study can raise faculty concerns of acceptability and feasibility. This workshop will lead you through a step-by-step process for preparing a strong, effective dissertation proposal with special emphasis on the evaluation dissertation.
The workshop will cover such topics as the nature, structure, and multiple functions of the dissertation proposal; how to construct a compelling argument; how to develop an effective problem statement and methods section; and how to provide the necessary assurances to get the proposal approved. Practical procedures and review criteria will be provided for each step. The workshop will emphasize application of the knowledge and skills taught to the participants’ personal dissertation situation through the use of an annotated case example, multiple self-assessment worksheets, and several opportunities for questions of personal application.
You will learn:
Nick L Smith is the co-author of How to Prepare a Dissertation Proposal (Syracuse University Press) and a past-president of AEA. He has taught research and evaluation courses for over 20 years at Syracuse University and is an experienced workshop presenter. He has served as a dissertation advisor to multiple students and is the primary architect of the curriculum and dissertation requirements in his department.
Session 22: Evaluation Dissertation
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
23. Managing Experimental Designs in Evaluation
Evaluators and administrators are increasingly expected to conduct studies using what are called scientifically-based methods. This workshop will provide you with the knowledge and ability to design and implement both random assignment experiments and alternative rigorous designs that can satisfy demands for scientifically-based methods.
With an emphasis on hands-on exercises and individual consultation within the group setting, this workshop will provide you with concrete skills to improve your current or anticipated work with experimental design studies.
You will learn:
George Julnes, Associate Professor of Psychology at Utah State University, has been contributing to evaluation theory for over 15 years and has been working with federal agencies, including the Social Security Administration, on the design and implementation of randomized field trials. Fred Newman is a Professor at Florida International University with over thirty years of experience in performing front line program evaluation studies.
Session 23: Experimental Design
Prerequisites: Understanding of threats to validity and the research designs used to minimize them, practical experience with evaluation helpful.
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Intermediate
24. Transformative Mixed Methods Evaluations
This workshop focuses on the methodological and contextual considerations in designing and conducting transformative mixed methods evaluation. It is geared to meet the needs of evaluators working in communities that reflect diversity in terms of culture, race/ethnicity, religion, language, gender, and disability. Deficit perspectives that are taken as common wisdom can have a deleterious effect on both the design of a program and the outcomes of that program. A transformative mixed methods approach enhances an evaluator's ability to accurately represent how this can happen.
Interactive activities based upon case studies will give you an opportunity to apply theoretical guidance that will be provided in a plenary session, a mini-lecture and small- and large-group discussions. Alternative strategies based on transformative mixed methods are illustrated through reference to the presenters' own work, the work of others, and the challenges that participants bring to the workshop.
You will learn:
Donna Mertens is a Past President of the American Evaluation Association who teaches evaluation methods and program evaluation to deaf and hearing graduate students at Gallaudet University in Washington, D.C. Mertens recently authored Transformative Research and Evaluation (Guilford). Katrina L Bledsoe is a consultant in the greater Washington DC area, conducting and managing evaluations in culturally complex communities nationally.
Session 24: Transformative Mixed Methods
Prerequisites: Academic training in evaluation and at least a year of experience in the field
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Intermediate
25. Human Systems Dynamics Theory Applied to Evaluation Practice
Are you uncertain about what system concepts apply to your evaluation work? We will explore three types of human systems dynamics (HSD) - organized, self-organizing, and unorganized - that provide a basis for differentially designing your evaluation of complex situations.
This session will provide a framework for understanding the nature of human systems dynamics and the kinds of evaluative questions that arise out of an understanding of these system dynamics. A case study will be used throughout the workshop to apply concepts that are discussed in mini-lectures and small group work.
You will learn:
Beverly Parsons is Executive Director of InSites in Colorado and has more than 20 years of experience in evaluating education and social service initiatives. She is the author of Evaluative Inquiry: How Evaluation Can Promote Student Success (Corwin Press). Margaret Hargreaves is a Senior Associate at Abt Associates in Cambridge, Mass. She developed the evaluation plan and is site lead for the CMS System Transformation Grant Evaluation, a five-year initiative to transform the U.S. long-term care system.
Session 25: Human Systems Dynamics Theory
Prerequisites: Knowledge of/experience in conducting or planning evaluations in complex social systems.
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Intermediate
26. Introduction to Needs Assessment and Designing Needs Assessment Surveys
Assessing needs is a task often assigned to evaluators with the assumption that they have been trained in or have experience with the activity. Yet, many evaluators have little experience or understanding of the tenets of needs assessment and how to collect quality needs assessment data.
This workshop uses multiple hands-on activities interspersed with mini-presentations and discussions to provide an overview of needs assessment with a strong emphasis on designing needs assessment surveys. The focus will be on basic terms and concepts, models of needs assessment, steps necessary to conduct a needs assessment, and an overview of methods with particular focus on the design and nature of needs assessment surveys.
You will learn:
James Altschuld is a well-known author and trainer in the area of needs assessment and was a pioneer in offering academic training in needs assessment to evaluators. His recent publications include co-authorship of the text From Needs Assessment to Action: Transforming Needs into Solution Strategies (SAGE, 2000).
Session 26: Needs Assessment
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
27. Mixed Methods for Program Evaluation
This workshop will guide you through selecting and applying quantitative, qualitative, and mixed data-analytic techniques for use in program evaluation activities. From formulating goals and objectives and developing research questions, to identifying your sampling frame and collecting data, we will ground you in the basics of mixed-methods research. From there, we will move to a more in-depth exploration of mixed-method data analysis and reporting considerations.
This interactive session, appropriate for new and seasoned program evaluators, will provide frameworks and heuristics for selecting and applying data-analytic techniques and validating, interpreting, and reporting results of mixed research studies. In addition, the presenters will provide an array of publishing tips and approaches for applying Standards and Guidelines when reporting results and writing the mixed-research report in program evaluation.
You will learn:
Anthony J. Onwuegbuzie and John Slate from Sam Houston State University, Kathleen M. T. Collins from the University of Arkansas, and Nancy Leech from the University of Colorado Denver comprise the team presenting the mixed research workshop. Members of the team have presented mixed research workshops to audiences worldwide.
Session 27: Mixed Methods
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
28. Lenses, Filters, Frames: Cultivating Self as Responsive Instrument
Evaluative judgments are inextricably bound up with culture and context and call for diversity-grounded, multilateral self-awareness. Excellence and ethical practice in evaluation are intertwined with orientations toward, responsiveness to, and capacities for engaging diversity. Breathing life into this expectation calls for critical ongoing personal homework for evaluators regarding their lenses, filters and frames vis-a-vis judgment-making.
Together, we will cultivate a deliberative forum for exploring these issues using micro-level assessment processes that will help attendees to explore mindfully the uses of self as knower, inquirer and engager of others within as well as across salient diversity divides. We often look but still do not see, listen but do not hear, touch but do not feel. Evaluators have a professional and ethical responsibility to address the ways our lenses, filters and frames may obscure or distort more than they illuminate.
You will learn:
Hazel Symonette brings over 30 years of work in diversity-related arenas and currently serves as a senior policy/planning analyst at the University of Wisconsin-Madison. She designed, and has offered annually, the Institute on Program Assessment for over 10 years. Her passion lies in expanding the cadre of practitioners who embrace end-to-end evaluative thinking/praxis within their program design and development efforts.
Session 28: Cultivating Self
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
29. Marketing Your Evaluation Business
This workshop outlines a logical approach to finding your business product/service niche and offers innovative sales tips to help distinguish your practice. We will coach you on assessing your strengths and weaknesses against emerging opportunities and threats in today's marketplace. The focus will be on harnessing your passion, profitability, and sustainability.
In short group and solo exercises you will learn the fine art of fully costing out and competitively pricing your services to realize a target profit. The workshop ends by demonstrating what it takes to enhance the future value of your evaluation practice, thereby enabling you to either sell it profitably or successfully transfer it to your family, partner, or employees. This full-day workshop will be highly interactive and will use numerous real-life situations for analysis and recommendations on ways to grow proactively and deliberately.
You will learn:
Melanie Hwalek is founder and president of SPEC Associates in Detroit, Michigan. In 2006, she co-authored “Building Your Evaluation Business into a Valuable Asset” in New Directions for Evaluation. Eric Abdullateef has provided consulting to small and medium-sized firms in both California and Hawaii for over 20 years. He recently moved to Washington, D.C. to pursue his own evaluation practice focused on business enabling development programs and policies.
Session 29: Marketing Your Evaluation Business
Prerequisites: Already running or have conscientiously decided to run a small business or serve as a self-employed evaluation contractor
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Intermediate
30. Evaluating Environmental and Resource Programs
This workshop provides an introduction to the key concepts and techniques for evaluating environmental and resource programs. After identifying the distinguishing characteristics of such programs, participants will be introduced to techniques for addressing their unique challenges and for promoting the use of environmental and resource evaluation.
The workshop will include an introductory session and concept and skill development sessions. The introductory session will use tutorials as well as small group and brainstorming sessions. Tutorials will provide introductions to the field, illustrated with the presenters' experience. In small group sessions, participants will identify the key distinguishing characteristics of evaluation in resource and environmental settings, compared to other evaluation settings.
You will learn:
John A McLaughlin is an independent consultant (McLaughlin Associates in Lanexa, Virginia) in strategic planning, performance measurement, and program evaluation. For the last several years, he has been a consultant to the U.S. Environmental Protection Agency. Andy Rowe is head of ARCeconomics and has been conducting evaluations in resource and environmental settings for 28 years in Canada, the U.S., the western Pacific, Asia and Europe. His evaluation designs are currently used in most federal environmental agencies in evaluating the effects of environmental and resource decisions.
Session 30: Environmental and Resource Programs
Prerequisites: Basic knowledge of evaluation methods and approaches
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Intermediate
31. Building an Evaluation System for Your Organization
This workshop is full. There are no more spaces available for this workshop and we do not maintain waiting lists. Please select an alternative workshop in which to enroll.
Increasingly, non-profit and other organizations are in need of integrated evaluation systems to support ongoing learning, change, improvement, and accountability. While individually commissioned external program evaluations are sometimes needed to meet funder requirements, ongoing integrated evaluation can best contribute to organizational success. This hands-on workshop will give you the tools you need to build an evaluation system for your organization.
Instructional methods for the workshop include mini-lectures, small and large group discussion, and individual application using handouts and tools. Prior to the workshop, participants will be contacted with instructions for gathering information about their organization's mission/strategic plan and any existing means of evaluation/data collection.
You will learn:
Rosalie Torres is the founder of Torres Consulting Group, a research, evaluation, and management consulting firm in Alameda, California. She has conducted and supported evaluations within organizations for the past 30 years, and has co-authored two books on this topic. Jill Casey has worked with Torres Consulting Group over the past 12 years to design and implement evaluations within educational and non-profit organizations.
Session 31: Building an Evaluation System for Your Organization
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Intermediate
32. Concept Mapping for Evaluation: A Mixed Methods, Participatory Approach
Concept mapping is a well-tested, mixed methods methodology that integrates familiar qualitative group processes with multivariate statistical analyses to help a group describe and organize its thinking on a topic. Ideas are represented visually in a series of easy-to-read graphics that capture specific ideas generated by a group; relationships between ideas; how ideas cluster together; and how those ideas are valued.
Through applications of structured group concept mapping, we will introduce key principles of stakeholder participation in evaluation. We will illustrate the steps in the methodology with project examples, with a particular focus on the planning stages of a project, as the decisions at this stage are applicable to any participatory project. A secondary focus will be on the unique analyses that create a shared conceptual framework for complex, systems-based issues and how to represent that in easy-to-read visuals.
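For orientation only: the multivariate analyses behind concept mapping are commonly multidimensional scaling of a statement dissimilarity matrix followed by hierarchical cluster analysis. The sketch below is a generic illustration of that pairing using scikit-learn; it is not the Concept Systems software or workshop material, and the tiny dissimilarity matrix is invented.

```python
# Generic illustration only (not the Concept Systems toolkit or workshop material):
# multidimensional scaling of a statement dissimilarity matrix followed by
# hierarchical clustering, the statistical pairing commonly used in concept mapping.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import AgglomerativeClustering

# Invented dissimilarity matrix for five brainstormed statements
# (0 = always sorted together by participants, 1 = never sorted together).
dissimilarity = np.array([
    [0.0, 0.2, 0.8, 0.9, 0.7],
    [0.2, 0.0, 0.7, 0.8, 0.9],
    [0.8, 0.7, 0.0, 0.3, 0.4],
    [0.9, 0.8, 0.3, 0.0, 0.2],
    [0.7, 0.9, 0.4, 0.2, 0.0],
])

# Place the statements on a two-dimensional point map.
points = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissimilarity)

# Group nearby statements into clusters (two clusters here, purely for illustration).
labels = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(points)

for i, (xy, label) in enumerate(zip(points, labels), start=1):
    print(f"Statement {i}: map position ({xy[0]:.2f}, {xy[1]:.2f}), cluster {label}")
```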
You will learn:
Mary Kane is President, and Kathleen Quinlan and Scott Rosas are Senior Consultants, at Concept Systems, Inc., a consulting company that uses the concept mapping methodology as a primary tool in its planning and evaluation consulting projects. Each of the presenters has facilitated large-scale national, state and/or local projects using the concept mapping methodology, has published related to the methodology, and is among the world's leading experts on this approach.
Session 32: Concept Mapping
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
33. Logic Models - Beyond the Traditional View
This workshop will present two broad topics that will increase the value of using logic models: 1) an expanded view of what forms logic models can take and 2) epistemological issues in logic modeling.
We will employ lectures, group discussions, and breakout sessions.
You will learn:
Jonathan A Morell is a Senior Policy Analyst at TechTeam Government Solutions. He is Editor-in-Chief of the international journal Evaluation and Program Planning and on the editorial boards of the International Journal of Electronic Business and the International Journal of Services Technology Management.
Session 33: Logic Models - Beyond the Traditional View
Prerequisites: Basic evaluation experience and experience constructing logic models
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Intermediate
34. Rigorous Interviewing Techniques and Content Analysis for Program Evaluation
In assessing the impact of programs and policies, it is important to recognize that quantitative methods, while enormously useful, have some important limitations that can be overcome by incorporating qualitative approaches. Rigorous interviewing and content analysis of qualitative data can generate important information about a program’s effectiveness by uncovering stakeholders’ experiences and perceptions to capture the overall meaning of a program to its participants, evaluate individualized outcomes, and document program dynamics and variations among individuals or different sites.
Through lecture, discussion, demonstration, and hands-on activities, this workshop will walk participants through a variety of interviewing techniques for use in program evaluation. We will also draw from specific case studies to conduct hands-on content analysis of qualitative data to generate credible findings addressing the relevant and priority evaluation questions and issues.
You will learn:
Danuta Wojnar teaches at Seattle University CON. In the graduate programs she predominantly teaches Concepts and Theories and Research Methodologies courses in which students and faculty explore the epistemological assumptions underlying a variety of scientific approaches, research designs, and practice applications. Dr. Wojnar’s program of research focuses on program development and evaluation to promote health care without stigmatization.
Session 34: Interviewing & Content Analysis
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
35. Using Effect Size and Association Measures
Answer the call to report effect size and association measures as part of your evaluation results. Improve your capacity to understand and apply a range of measures including: standardized measures of effect size from Cohen, Glass, and Hedges; eta-squared; omega-squared; the intraclass correlation coefficient; and Cramer’s V. Together we will explore how to select the best measures, how to perform the needed calculations, and how to analyze, interpret, and report on the output in ways that strengthen your overall evaluation.
Through mini-lecture, hands-on exercises, and computer-based demonstration, you will improve your understanding of the theoretical foundation and computational procedures for each measure as well as ways to identify and correct for bias.
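As a concrete preview of the kinds of calculations involved, here is a minimal sketch (not taken from the workshop) that computes Cohen's d and eta-squared for two invented groups; the data and group labels are assumptions for illustration only.

```python
# Illustrative sketch only (not workshop material): computing two of the measures
# named above, Cohen's d and eta-squared, from invented data.
import numpy as np

group_a = np.array([78, 82, 69, 91, 85, 77, 80, 73], dtype=float)
group_b = np.array([70, 74, 66, 79, 72, 68, 75, 71], dtype=float)

# Cohen's d: mean difference divided by the pooled standard deviation.
n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1))
                    / (n_a + n_b - 2))
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd

# Eta-squared for a one-way design: between-group sum of squares over total sum of squares.
scores = np.concatenate([group_a, group_b])
grand_mean = scores.mean()
ss_between = n_a * (group_a.mean() - grand_mean) ** 2 + n_b * (group_b.mean() - grand_mean) ** 2
ss_total = ((scores - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

print(f"Cohen's d   = {cohens_d:.2f}")
print(f"eta-squared = {eta_squared:.2f}")
```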
You will learn:
Jack Barnette hails from The University of Alabama at Birmingham. He has been conducting research and writing on this topic for over ten years. Jack has won awards for outstanding teaching and is a regular facilitator both at AEA's annual conference and the CDC/AEA Summer Evaluation Institute.
Session 35: Effect Size Measures
Prerequisites: Univariate statistics through ANOVA and understanding of and use of confidence levels.
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Advanced
36. Grantwriting for Evaluators
Have you been called upon to write the evaluation portion of a grant proposal or even a stand-alone grant proposal to seek funding for an evaluation research project? This detailed one-day workshop introduces beginning grantwriters to the field of grantwriting. We’ll begin at the beginning, identifying sources of grant funding as well as the ways in which evaluation is a contributory component of strong comprehensive program grants.
Using a hands-on approach, this workshop will provide you with the information and skills you need to succeed at grantwriting. The course will culminate with time set aside for question and answer with an experienced evaluator and grantwriter who will provide tips and lessons learned.
You will learn:
Joel Philp holds a doctorate in Psychology and has over 13 years of experience in the evaluation field. As Director of The Evaluation Group, Philp is responsible for the oversight of all evaluations and for developing sound evaluation plans to accompany selected grant applications. Anne Black has a master’s in Biometry and over 10 years of experience as a grant writer. She is the director of Grants Development at Research Associates and has taught grant writing workshops nationwide and co-authored Developing Successful Grants.
Session 36: Grantwriting
Scheduled: Wednesday, November 5, 8:00 AM to 3:00 PM
Level: Beginner, no prerequisites
This workshop is designed to explore one of the most fundamental issues facing evaluators today – how to gather credible and actionable evidence in contemporary evaluation practice. Many thorny debates about what counts as evidence have occurred in recent years, but few have sorted out the issues in a way that directly informs evaluation practice.
Through mini-lectures, exercises, and discussions, you will come away from this workshop with an understanding of the philosophical, theoretical, methodological, political, and ethical dimensions of gathering credible and actionable evidence. You will also learn about the strengths and limitations of experimental and non-experimental approaches for gathering evidence for evaluations. Finally, you will explore a step-by-step approach for gathering credible and actionable evidence across a wide range of evaluation practice problems and settings.
You will learn:
Stewart I Donaldson is Dean of the School of Behavioral and Organizational Sciences at Claremont Graduate University. He has published widely on the topic of evaluation, developed one of the largest university-based evaluation training programs, and has conducted evaluations for more than 100 organizations during the past decade. His recent work includes What Counts as Credible Evidence in Applied Research and Evaluation Practice?
Session 37: Credible Evidence
Prerequisites: Basic understanding of evaluation.
Scheduled: Wednesday, November 5, 8:00 AM to 11:00 AM
Level: Intermediate
38. Video Interviewing and Evaluation Inquiry
Recent developments in video technology have made it a powerful and accessible tool for qualitative inquiry in a variety of fields (visual anthropology, educational research, social and cultural studies). This session is designed to provide an introduction to videography methods for practicing evaluators who have experience in qualitative approaches to evaluation inquiry and are interested in using videography as a means of data collection, exploring analytic procedures for video data, and developing interactive approaches to evaluation reporting.
The workshop will be delivered through a lecture, demonstration, and discussion format. Basic points will be introduced, with video samples and opportunities to use equipment and software. Discussions will draw upon your own experiences as well as case study and problem-solving activities.
You will learn:
William H Rickards is Senior Research Associate for Educational Research and Evaluation at Alverno College in Milwaukee, Wisconsin. Videography work has become an important part of his evaluation studies in college curriculum, student development, and in exploring and understanding student and faculty experience in diverse learning environments.
Session 38: Video Interviewing
Prerequisites: Experience in basic evaluation practice and design
Scheduled: Wednesday, November 5, 8:00 AM to 11:00 AM
Level: Intermediate
39. Fun and Games With Logframes: Participatory Strategies for Learning Logic Models
In the international development community, logic models (logframes) have become the industry standard to summarize a project/program’s design and intended results. At best, logframes are tools that help project design, monitoring, and evaluation (DM&E). At worst, they can straitjacket a project, imposing an outside, technocentric method that alienates rather than fosters local participation in project design, monitoring, and evaluation. This skills-building workshop is aimed at helping to minimize the alienation of local partners in the logframe process.
The workshop will utilize participatory strategies where you learn not only from the facilitator, but from small group (team) games with other participants. Examples will be drawn from actual projects in the following sectors: Water/Sanitation, Disaster Management, Community Health, Livelihoods, Hygiene Promotion, HIV/AIDS.
You will learn:
Scott Chaplowe has more than 10 years of experience in program development, management, and M&E with the United Nations, international development NGOs, and U.S.-based nonprofit and public organizations. Since September 2005, he has worked in Asia as a regional technical advisor in M&E for the Tsunami Recovery Program (TRP) of the American Red Cross (ARC) and the International Federation of Red Cross and Red Crescent (IFRC).
Session 39: Fun With Logframes
Scheduled: Wednesday, November 5, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites
40. Visual Presentation of Quantitative Data
This workshop is full. There are no more spaces available for this workshop and we do not maintain waiting lists. Please select an alternative workshop in which to enroll.
Presenting data
through graphics, rather than numbers, can be a powerful tool for
understanding data and disseminating findings. But when used improperly, it
can also confuse audiences, complicate research, and obscure findings. This
workshop will enable you to capitalize on the myriad benefits of visual
representation by providing them with tools for displaying data graphically,
be it for presentations, evaluation reports, publications, or continued
dialogue with program funders, personnel, and recipients
We will walk you through the dos and don'ts of visual representations of data, using group exercises, novel visual aids, class discussions, lecture, and practice data. Once you have a firm grasp of the pitfalls of presenting quantitative data, we will share easy methods for presenting information well that capitalize on the way people naturally scan and process visual information. Then you will be given the opportunity to try out your new knowledge by graphing real data.
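For readers who want a concrete illustration before the workshop, the sketch below shows one common "do" of visual data presentation: a plain, directly labeled chart with non-essential ink removed. It is not part of the workshop materials; the Python/matplotlib code, program names, and values in it are hypothetical.

```python
# Illustrative sketch only (not workshop material): a plain, directly labeled
# bar chart in place of a cluttered one. Program names and values are hypothetical.
import matplotlib.pyplot as plt

programs = ["Program A", "Program B", "Program C", "Program D"]
completion_rates = [0.62, 0.71, 0.48, 0.83]  # hypothetical completion rates

fig, ax = plt.subplots(figsize=(6, 3))
bars = ax.barh(programs, completion_rates, color="steelblue")

# Label each bar directly so readers don't have to trace gridlines back to an axis
for bar, rate in zip(bars, completion_rates):
    ax.text(rate + 0.01, bar.get_y() + bar.get_height() / 2,
            f"{rate:.0%}", va="center")

# Strip non-essential ink: ticks and most spines
ax.set_xlim(0, 1)
ax.set_xticks([])
for spine in ("top", "right", "bottom"):
    ax.spines[spine].set_visible(False)

ax.set_title("Participant completion rates (hypothetical data)")
plt.tight_layout()
plt.show()
```

Direct labels and minimal axis clutter follow the general idea described above: let readers scan the graphic and take in the values without extra decoding work.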
You will learn:
David Streiner
is a
Professor of Psychiatry at the University of Toronto and Assistant V.P. and
Research Director of the Kunin-Lunenfeld Applied Research Unit at the
Baycrest Centre for Geriatric Care.
Stephanie Reich
is an assistant professor at the University of California, Irvine and a
former research associate at the Center for Evaluation and Program
Improvement at Vanderbilt University.
Session 40: Visual Presentation of Quantitative Data
Scheduled: Wednesday, November 5, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites
41.
New Developments in Focus Group Interviewing
Focus group interviews have evolved over the past 50 years. Early focus
groups began with white middle-class consumers in market research
environments and have now spread around the world. They are used with many
different audiences and are being conducted in person, on the telephone, and on the Internet.
Through short lecture, discussion, and demonstrations, we will explore
recent developments and examine the advantages and disadvantages of these
changes. Issues for consideration include responding to changes in technology, including the Internet and cell phones, and working with alternative audiences, including youth and difficult-to-access populations.
You will learn:
Richard Krueger
is a senior fellow at the University of Minnesota. In 30+ years of practice
he has conducted thousands of focus group interviews and he still gets
excited about listening to people. He is the author of 6 books on focus
group interviewing and is a past president of AEA.
Session 41: New Focus Group
Scheduled: Wednesday, November 5, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites
42.
Using Success Stories to Promote Program Success
Success stories are
relevant to the practice of evaluation and are increasingly used to
communicate with stakeholders about a program's achievements. They are an
effective way for prevention programs to highlight program progress, as these programs are often unable to demonstrate outcomes for several years.
Therefore, communicating success during program development and
implementation is important for building program momentum and
sustainability.
This is an interactive workshop that uses a combination of didactic presentation, group exercises, and hands-on practice so that participants can apply what they learn to develop their own success stories. Each participant will develop a list of possible success stories and, from this list, will develop one elevator story and a draft of a one-page success story.
You will learn:
Ann Webb Price and Rene Lavinghouze are co-authors of Impact and Value: Telling Your Program's Success Story, a workbook written for the CDC's Division of Oral Health (DOH). Price is the lead for the Division of Oral Health's Success Story data collection project for its 13 oral health grantees. Lavinghouze is the lead evaluator for DOH and supervises this and all other division evaluation projects.
Session 42: Using Success Stories
Scheduled: Wednesday, November 5, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites
43.
The Value and Use of Evaluability Assessments
Evaluability assessments (EAs)
are a valuable and important tool to have in an evaluator's toolbox. While
rigorous evaluation is a valuable method, it can be costly and
time-consuming and is not an appropriate fit for every initiative. EAs offer
a cost-effective technique to help guide evaluation choices.
Using mini-lectures, practice exercises, and discussions, this workshop will provide you with an understanding of EAs and how they can be applied in your own practice.
You will learn:
Nicola Dawkins
is a Technical Director at Macro International Inc. As Director of the
Coordinating Center for the Early Assessment of Programs and Policies to
Prevent Childhood Obesity, Dawkins has conducted multiple EAs herself and
has led a large team of researchers in carrying out EAs of obesity
prevention initiatives around the country.
Session 43: Evaluability Assessments
Prerequisites: A basic understanding of the goals and primary methods of full-scale evaluation
Scheduled: Wednesday, November 5, 8:00 AM to 11:00 AM
Level: Intermediate
44. Rapid Ethnography
We will explore how to plan, organize, and implement team-oriented, time-constrained, systematic qualitative methods whose results can stand alone or complement quantitative data collection and analysis in process and outcome evaluation work. Topics addressed in this course will include single and multiple case study designs; site selection criteria development and application; key informant/collaborator selection; systematic qualitative data collection strategies and associated team training/orientation; and key concepts in the use of text-based database management software like N-6 and Atlas.
The session will include case studies, discussion, and a participatory
exercise designed to illustrate the difference between development of survey
items and ethnographic interviewing topic guides. You will leave with an understanding of the ways rapid ethnography can enhance your evaluation process and improve your evaluation findings.
You
will learn:
Edward Liebow
is Associate
Director of Battelle's Centers for Public Health Research and Evaluation.
Liebow has conducted policy-related and evaluation research throughout the
western US and in South Australia focusing on applying ethnographic research
methods to understand the distinctive response of disadvantaged communities
to potential environmental and public health hazards posed by development
programs and policies.
Session 44: Rapid Ethnography
Scheduled: Wednesday, November 5, 8:00 AM to 11:00 AM
Level: Beginner, no prerequisites
45.
Evaluation Capacity Building 101: Working within Your Organization
Evaluation capacity building (ECB) within organizations requires long-term effort from ECB practitioners and their organizational champions and partners. This workshop covers evaluation capacity building, practical examples of ECB, the pros and cons of these efforts, and individual reflection on ECB in participants' organizational evaluation cultures.
Through lectures, small group work and large group sharing, you will complete a basic plan for building ECB in your own organization. You will find the leverage points in your organization and engage in a discussion of next steps involved in an ECB effort there.
You will learn:
Nancy Franz
and Heather
Boyd are
both currently faculty members at Virginia Tech University, serving as a
program development specialist and a program evaluation specialist,
respectively. Franz recently co-authored a publication in the peer-reviewed
Journal of Extension focused on building institutional capacity for
communicating impacts. Boyd recently co-authored an article in the peer-reviewed Journal of Extension entitled “An Exploratory Profile of Extension Evaluation Professionals.”
Session 45: Capacity Building
Scheduled: Wednesday, November 5, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites
46. Empowerment Evaluation
Empowerment Evaluation builds program capacity and fosters program
improvement. It teaches people to help themselves by learning how to
evaluate their own programs. The basic steps of empowerment evaluation
include: 1) establishing a mission or unifying purpose for a group or
program; 2) taking stock - creating a baseline to measure future growth and
improvement; and 3) planning for the future - establishing goals and
strategies to achieve goals, as well as credible evidence to monitor change.
The role of the evaluator is that of coach or facilitator in an empowerment
evaluation, since the group is in charge of the evaluation itself.
Employing lecture, activities, demonstration and case examples ranging from
townships in South Africa to a $15 million Hewlett-Packard Digital Village
project, the workshop will introduce you to the steps of empowerment
evaluation and tools to facilitate the approach. You
will join participants in conducting an assessment, using empowerment
evaluation steps and techniques.
You
will learn:
David Fetterman
hails from Stanford University and is the editor of (and a contributor to)
the recently published Empowerment Evaluation Principles in Practice
(Guilford). He chairs the AEA Collaborative, Participatory and Empowerment Evaluation Topical Interest Group and is a highly experienced and sought-after facilitator.
Session 46: Empowerment Evaluation
Scheduled: Wednesday, November 5, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites
47.
An Executive Summary is Not Enough: Effective Knowledge Transfer Techniques
for Evaluators
As an evaluator you are
conscientious about conducting the best evaluation possible, but how much
thought do you give to communicating your results effectively? Knowledge
transfer is an important skill for evaluators who care about seeing their
results disseminated and recommendations implemented. Drawing on current
research, this interactive workshop will present an overview of the key
principles of knowledge transfer and engage participants in a discussion of
its role in effective evaluation.
The workshop
features short lectures and brainstorming sessions, and you will have the
opportunity to work on a real example in groups. Those groups will be tasked
with conducting a 'mini-evaluation' and effectively presenting the results
using their newly acquired skills.
You will learn:
Kylie Hutchinson
has served since 2005 as the trainer for the Canadian Evaluation Society's
Essential Skills Series (ESS) in British Columbia. Her interest in
dissemination and knowledge transfer stems from ten years of experience as
an independent evaluation consultant.
Session 47: Knowledge Transfer
Scheduled: Wednesday, November 5, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites
48.
Building Evaluation Capacity Through Appreciative Inquiry and Soft Systems
Tools
In this half-day
workshop, you will use Appreciative Inquiry to explore successful strategies
for building evaluation capacity in organizations. You will be introduced to
Appreciative Inquiry and explore ways in which it may be applied in your own
evaluation work. Through real-world case examples, you will discuss
challenges and strategies in building evaluation capacity.
Through a
mini-lecture and interviews, you will explore the reasons why organizations
should build evaluation capacity, as well as successful strategies for
building evaluation capacity. You will work with case studies for building
evaluation capacity for: (1) the Gates-funded African Science Academies Initiative, (2) grantees in an anti-human-trafficking project in Albania, and
(3) an international health quality improvement organization.
You will learn:
Patty Hill is an evaluation and planning specialist with 20 years of experience assisting organizations to understand and measure their impact, and to build evaluation capacity and systems. Ms. Hill approaches evaluation capacity building and systems development from an appreciative evaluation perspective.
Mary Gutmann is the Senior Research Specialist for EnCompass, and is engaged in evaluation capacity building for several international development programs.
Session 48: Building Evaluation Capacity Through Appreciative Inquiry
Scheduled: Wednesday, November 5, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites
49.
Handling Data: From Logic Model to Final Report
This workshop is full. There are no more spaces available for this workshop and we do not maintain waiting lists. Please select an alternative workshop in which to enroll.
Collect, analyze and present data from complex evaluation studies in ways
that are feasible for the evaluator and meaningful to the client. Explore
lessons learned through over twenty years in evaluation consulting to ask
the right questions, collect the right data and analyze and present findings
in simple yet comprehensive ways.
We
will use actual data samples along with examples of analysis techniques. You
will have an opportunity to work in small groups with sample data and will
explore various analysis techniques. Throughout the workshop, the presenter
will respond to individual questions and facilitate group discussion on data
handling topics. At the end of the workshop, you will take away fresh ideas to tackle your data handling challenges.
You
will learn:
Gail Barrington
started Barrington Research Group more than 20 years ago and has been
conducting complex evaluations ever since. A top rated facilitator, she has
taught workshops throughout the US and Canada for many years. She is the
2008 recipient of the Canadian Evaluation Society’s Award for Contribution
to Evaluation in Canada.
Session 49: Handling Data
Prerequisites: Experience collecting data in evaluation projects - No
in-depth statistical knowledge required
Scheduled: Wednesday, November 5, 12:00 PM to 3:00 PM
Level: Intermediate
50.
Using Stories in Evaluation
Stories are an effective means of communicating the ways in which
individuals are influenced by educational, health, and human service
agencies and programs. Unfortunately, the story has been undervalued and
largely ignored as a research and reporting procedure. Stories are sometimes
regarded with suspicion because of the haphazard manner in which they are
captured or the cavalier promise of what the story depicts.
Through short lecture, discussion, demonstration, and hands-on activities,
this workshop explores effective strategies for discovering, collecting,
analyzing and reporting stories that illustrate program processes, benefits,
strengths or weaknesses. You will leave prepared to integrate stories into
your evaluation planning, data collection, and reporting.
You
will learn:
Richard Krueger
is a senior fellow at the University of Minnesota and has been actively
listening for evaluation stories for over a decade. He has offered
well-received professional development workshops at AEA and for non-profit
and government audiences for over 15 years. Richard is a past president of
AEA.
Session 50: Using Stories
Scheduled: Wednesday, November 5, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites
51.
Your Policy Regarding the Use of Qualitative Data Analysis Software (QDAS)
Debating the virtues of
different qualitative data analysis software is a bit like debating whether
you should use Mac or PC if you are writing a novel. The relevance of the
debate is minor when weighed against other important policies and procedures
that influence the task at hand. This workshop will de-center the “which
package?” debate, and refocus the discussion on 'what is to be gained and
what is to be lost?' when using any of the current software options.
We will explore how to appropriately position any of the current software packages in the qualitative research process and will build a policy document about the use and misuse of qualitative data
analysis software. Each of three segments includes time for
mini-lecture and demonstration, small group activities, and
a “community meeting” to share information from the small
group activities. The session will end with a collaborative refinement of a
“community of practice framework” for effectively positioning QDAS.
You will learn:
Kristi Jackson
is president of QUERI Inc. in Denver, Colorado. She began using qualitative
data analysis software in 1993, became an expert and trainer of one of the
leading software packages in 1996, and started her own company using the
software and coaching other researchers on the methodological implications
of software use in 2002.
Session 51: Qualitative Analysis Software
Scheduled: Wednesday, November 5, 12:00 PM to 3:00 PM
Level: Beginner, no prerequisites
52. Advanced Applications of Program Theory
While
simple logic models are an adequate way to gain clarity and initial
understanding about a program, sound program theory can enhance
understanding of the underlying logic of the program by providing a
disciplined way to state and test assumptions about how program activities
are expected to lead to program outcomes.
Lecture, exercises, discussion, and peer-critique will help you to develop
and use program theory as a basis for decisions about measurement and
evaluation methods, to disentangle the success or failure of a program from
the validity of its conceptual model, and to facilitate the participation
and engagement of diverse stakeholder groups.
You
will learn:
Stewart Donaldson is Dean of the School of Behavioral and Organizational Sciences at Claremont Graduate University. He has published widely on the topic of applying program theory, developed one of the largest university-based evaluation training programs, and has conducted theory-driven evaluations for more than 100 organizations during the past decade.
Session 52: Adv Program Theory
Prerequisites: Experience or Training in Logic Models
Scheduled: Wednesday, November 5, 12:00 PM to 3:00 PM
Level: Intermediate
53.
Nonparametric Statistics: What to Do When Your Data is Skewed or Your Sample
Size is Small
So many of us have
encountered situations where we simply did not end up with the robust,
bell-shaped data set we thought we would have to analyze. In these cases,
traditional statistical methods lose their power and are no longer
appropriate. This workshop provides a brief overview of parametric
statistics in order to contrast them with non-parametric statistics.
Different data situations which require non-parametric statistics will be
reviewed and appropriate techniques will be demonstrated step by step.
This workshop will combine a
classroom style with some group work. The instructor will use a laptop to
demonstrate how to run the non-parametric statistics in SPSS. You are
invited to bring a laptop with SPSS to follow along. You are encouraged to
e-mail the facilitator prior to the conference with your specific data
questions which may then be chosen for problem-solving in the workshop.
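If you would like a concrete preview of the contrast the workshop draws, the sketch below compares a parametric test with its rank-based counterpart on small, skewed samples. It is illustrative only and not workshop material: the workshop demonstrations use SPSS, whereas this sketch uses Python with SciPy, and the sample data are invented.

```python
# Illustrative sketch only (not workshop material): contrasting a parametric test
# with a nonparametric counterpart on small, skewed samples.
# Assumes Python with NumPy and SciPy installed; the data below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two small, right-skewed samples (e.g., costs or wait times for two groups)
group_a = rng.exponential(scale=10.0, size=12)
group_b = rng.exponential(scale=16.0, size=12)

# Parametric approach: independent-samples t-test (assumes roughly normal data)
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Nonparametric counterpart: Mann-Whitney U test (rank-based, no normality assumption)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t-test:        t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney:  U = {u_stat:.1f}, p = {u_p:.3f}")
```

The rank-based test gives up a little power when the data really are normal, but it remains valid when the normality assumption fails or the sample is very small, which is the trade-off this workshop examines.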
You will learn:
Jennifer Camacho
is an Epidemiologist with the Chicago Department of Public Health, where she works as an internal evaluator. She regularly teaches informal courses on the
use of non-parametric statistics in the evaluation of small programs and
enjoys doing independent evaluative and statistical consulting.
Session 53: Nonparametric Statistics
Scheduled: Sunday, November 9, 9:00 AM to 12:00 PM
Level: Beginner, no prerequisites
54.
Advanced Focus Group Moderator Training
The literature is rich in textbooks and case studies on many aspects of focus groups, including design, implementation, and analyses. Missing, however, are guidelines and discussions on how to moderate a focus group.
In
this experiential learning environment, you will find out how to maximize
time, build rapport, create energy and apply communication tools in a focus
group to maintain the flow of discussion among the participants and elicit
more than one-person answers.
Using
practical exercises and examples, including role play and constructive
peer-critique as a focus group leader or respondent, you will explore
effective focus group moderation including ways to increase and limit
responses among individuals and the group as a whole. In addition, many of the strategies presented in the workshop are applicable more broadly in other evaluation settings, such as community forums and committee meetings, to stimulate discussion.
You
will learn:
Nancy-Ellen Kiernan
has facilitated over 200 workshops on evaluation methodology and moderated
focus groups in 50+ studies with groups ranging from Amish dairy farmers in
barns to at-risk teens in youth centers, to university faculty in
classrooms. She is on the faculty at Penn State University and a regular
workshop presenter at AEA’s annual conference.
Session 54: Moderator Training
Prerequisites: Having moderated 2 focus groups and written focus
group questions and probes
Scheduled: Sunday, November 9, 9:00 AM to 12:00 PM
Level: Intermediate
55.
Conflict Resolution Skills for Evaluators
Unacknowledged and unresolved conflict can challenge even the most skilled
evaluators. Conflict between evaluators and clients and among stakeholders creates barriers to successful completion of the evaluation project. This
workshop will delve into ways to improve listening, problem solving,
communication and facilitation skills and introduce a streamlined process of
conflict resolution that may be used with clients and stakeholders.
Through a hands-on, experiential approach using real-life examples from
program evaluation, you will become skilled at the practical applications of
conflict resolution as they apply to situations in program evaluation. You
will have the opportunity to assess your own approach to handling conflict
and to build on that assessment to improve your conflict resolution skills.
You
will learn:
Jeanne Zimmer
has
served as Executive Director of the Dispute Resolution Center since 2001 and
is completing a doctorate in evaluation studies with a minor in conflict
management at the University of Minnesota. For over a decade, she has been a
very well-received professional trainer in conflict resolution and
communications skills.
Session 55: Conflict Resolution
Scheduled: Sunday, November 9, 9:00 AM to 12:00 PM
Level: Beginner, no prerequisites
56.
Performance Measurement in Evaluations of Federal Programs
Federal grants and programs are
increasingly emphasizing the importance of evaluation, often through
performance measurement. Strong project objectives and measurable
performance measures are critical to both good proposals and successful
evaluations. This workshop will teach you how to develop high quality
project objectives and performance measures using a framework that is
consistent with the federal government's performance measurement criteria.
In mini-lectures,
group discussions and practice exercises, you will be provided with
practical strategies and planning devices to use when writing project
objectives and measures and planning evaluations focused on performance
measurement. You will increase your understanding of the relationships
between project activities and intended program outcomes through the
development of logic models. This will assist in the development of more
sound evaluation designs, which will in turn allow for the collection of
higher-quality and more meaningful data.
You will learn:
To see how objectives and performance measures can easily fit into the performance reports required by the U.S. Department of Education and other funders.
Courtney L Brown and Mindy Hightower King are faculty members at Indiana University, where they are Senior Associates directing and managing evaluations for local and state agencies, foundations, and non-profit organizations. They provide ongoing technical assistance and training to U.S. Department of Education grantees in order to strengthen performance measurement.
Session 56: Performance Measurement
Scheduled: Sunday, November 9, 9:00 AM to 12:00 PM
Level: Beginner
57.
Internet Survey Construction and Administration
This workshop will present introductory-level information about online survey design. We'll examine methods for generating clear and valid questions using appropriate response formats. Other topics include methodology for increasing response rates and reliability, along with online survey construction and administration.
Information will be disseminated
using mini-lectures, computer demonstrations, and hands-on exercises. You
will be shown how to use online survey hosting sites such as
surveymonkey.com and questionpro.com, and you will become familiar with
programs such as Remark Web Survey and Broadcast mass e-mailing.
You will learn:
Joel T Nadler and Nicole L Cundiff are primary trainers of web-based programs and techniques at Applied Research Consultants (ARC), a graduate student-run consulting firm at Southern Illinois University Carbondale (SIUC), and have written numerous reports utilizing web based surveys. Rebecca Weston is an associate professor of psychology at SIUC who has taught graduate and undergraduate classes in psychological measurement.
Session 57: Internet Survey
Scheduled: Sunday, November 9, 9:00 AM to 12:00 PM
Level: Beginner