PRE-INSTITUTE WORKSHOPS
Note that these pre-institute workshops are not included
in standard Institute registration and require an
additional payment; however, they may be registered for on
the same form.
PI1: Quantitative Methods for Evaluation
Level:
Beginner
Description:
Brush up on your statistics! We will focus primarily on
descriptive statistics: measures of central tendency
(e.g., the mean) and measures of dispersion (e.g.,
standard deviation), defining what these terms mean,
when and how to use them, and how to present them to
evaluation stakeholders. From there, we will move to a
basic discussion of inferential statistics, the
difference between a statistic and a parameter, and the
use of population parameters. The workshop will involve
demonstrations by the presenter using SPSS software as
well as hands-on exercises for you to calculate some of
the descriptive statistics we cover. This workshop is
not designed to teach you how to use SPSS or any other
statistical software package, but rather to introduce
you to the process of statistical analysis and
interpreting statistical output. We will focus on
interpreting statistical indices (e.g., means and
standard deviations) as well as graphical output, such
as histograms and stem-and-leaf plots. The understanding
you gain regarding the concepts presented in this
course, coupled with the chance to interpret statistical
output, should ready you to employ basic descriptive
statistics as well as to interpret the work of others.
Audience:
Attendees who have never taken a statistics course, or who
feel they need a refresher on basic statistics. No prior
background in statistics is required.
Katherine McKnight
has taught statistics workshops at AEA's annual
conferences for many years. An outstanding facilitator,
she is known for making difficult concepts accessible
and interesting.
Offered:
PI2: Introduction to Evaluation
Level:
Advanced Beginner
Description:
This
workshop will provide an overview of program evaluation
for Institute participants with some, but not extensive,
prior background in program evaluation. The session will
be organized around the Centers for Disease Control and
Prevention’s (CDC) six-step Framework for Program
Evaluation in Public Health as well as the four sets
of evaluation standards from the Joint Committee on
Standards for Educational Evaluation. The six steps constitute a
comprehensive approach to evaluation. While its origins
are in the public health sector, the Framework approach
can guide any evaluation. The course will touch on all
six steps, but particular emphasis will be put on the
early steps, including identification and engagement of
stakeholders, creation of logic models, and
selecting/focusing evaluation questions. Several case
studies will be used both as illustrations and as an
opportunity for participants to apply the content of the
course and work through some of the trade-offs and
challenges inherent in program evaluation in public
health and human services.
Audience:
Attendees with some background in evaluation, but who
desire an overview and an opportunity to examine
challenges and approaches. Cases will be from public
health but general enough to yield information
applicable to any other setting or sector.
Thomas Chapel
is a Senior Evaluation Scientist in the Office of
Workforce and Career Development, at the Centers for
Disease Control and Prevention. He serves as a central
resource on strategic planning and program evaluation
for CDC programs and their partners. Before joining CDC,
Mr. Chapel was Vice-President of the Atlanta office of
Macro International where he directed and managed
projects in program evaluation, strategic planning, and
evaluation design for public and non-profit
organizations. He is a frequent presenter at national
meetings, a frequent contributor to edited volumes and
monographs on evaluation, and has facilitated or served
on numerous expert panels on public health and
evaluation topics. Mr. Chapel is active nationally and
locally in the American Evaluation Association (AEA),
currently as past-chair of the Membership Committee and
convener of AEA’s Local Affiliate Collaborative. Mr.
Chapel holds a BA degree from Johns Hopkins University
and MA in public policy and MBA degrees from the
University of Minnesota.
Offered:
INDEX OF
CONCURRENT SESSIONS BY TIMESLOT
KEYNOTE
SESSION DESCRIPTIONS
Monday,
June 15, Keynote 1:
(Patton)
Developmental Evaluation: Systems
Thinking and Complexity Perspectives Applied to
Evaluating Social Innovation
Description:
Developmental Evaluation (DE) refers to long-term
partnering relationships between evaluators and those
engaged in innovative initiatives. DE is an alternative
to formative and summative evaluation for complex
situations where the context, initiative, and nature of
relationships may change over time as the program
evolves. DE allows for the use of evaluative data
throughout the program cycle, allows for corrections
along the way, and builds an ongoing commitment to
data-driven decision-making.
Michael Quinn Patton
is the author of five major
books in the field of evaluation, including Utilization-Focused Evaluation and an upcoming
publication on developmental evaluation that serves as
the basis for today’s presentation. He is the former
President of the American Evaluation Association and the
only recipient of both the Alva and Gunnar Myrdal Award
for Outstanding Contributions to Useful and Practical
Evaluation from the Evaluation Research Society and the
Paul F. Lazarsfeld Award for Lifelong Contributions to
Evaluation Theory from the American Evaluation
Association. A noted speaker and trainer, he is sought
out for his insights, intellect, and engaging
presentation style.
Offered as Keynote Address:
Tuesday,
June 16,
Keynote 2:
(Chapel and MacDonald)
Program
Evaluation Meets the Real World: Reflections on CDC's
Evaluation Framework After 10 Years
Description:
In 1999, the Centers for Disease Control and Prevention
(CDC) published the Framework for Program Evaluation in
Public Health (http://www.cdc.gov/eval/framework.htm).
The steps and standards aimed to increase the quality of
program evaluation and the use of evaluation findings for
program improvement. Applicable to a wide range of
settings, the Framework emphasized stakeholder
engagement throughout the evaluation process, and a
strong program description as a foundation for designs
that most often lead to the use of findings. As we
approach the 10-year anniversary of its release, a group
of evaluators from across CDC reflected on their
experience using the Framework. The group explored the
complexities of real-world program evaluation, and
identified the added-value of the approach and specific
steps. Following a series of meetings and a survey of
practitioners, they offer suggested refinements to the
Framework that have wider relevance to any applied
evaluation approach. In this plenary session, Tom Chapel
and Goldie MacDonald discuss the workgroup’s findings,
including lessons learned in using the Framework, and
practical recommendations on how to support evaluation
and grow evaluation capacity in any large organization.
Thomas Chapel is Senior Evaluation Scientist in
the CDC’s Office of Workforce and Career Development,
where he assists CDC programs and partners with program
evaluation and strategic planning. He recently served
as CDC's Acting Chief Performance Officer. Goldie
MacDonald is a Health Scientist in the Coordinating
Office for Global Health (COGH) at CDC. She leads and
coordinates monitoring and evaluation of CDC investments
in international avian and pandemic influenza
preparedness.
Offered as Keynote Address:
Wednesday,
June 17,
Keynote 3:
(Goodman) Looking Back at 20
Years in Health Promotion Evaluation
Description:
Dr. Goodman will share “larger lessons”
that he learned from over 20 years of evaluating health
promotion programs. The lessons draw from evaluations of
programs funded by the Children’s Defense Fund, the US
Centers for Disease Control and Prevention, the Office of
Women’s Health, and philanthropic foundations. The
programs focus on several important public health
concerns, including access to prenatal care among poor
and underrepresented populations,
advancement of Centers of Excellence for Women’s Health
in teaching hospitals, breast and cervical cancer
programs in rural areas, and community-based diabetes
programming. The lessons derived may be applied
not only to evaluation practice, but also to building a
professional disposition towards the work.
Robert M. Goodman
is a Professor and Dean of the School of Health,
Physical Education and Recreation at Indiana University.
Dr. Goodman has written extensively on issues concerning
community health development, community capacity,
community coalitions, evaluation methods, organizational
development, and the institutionalization of health
programs. He has been the principal investigator and
evaluator on projects for CDC, The National Cancer
Institute, The Centers for Substance Abuse Prevention,
The Children’s Defense Fund, and several state health
departments. In 2004, Dr. Goodman received the
Distinguished Fellow Award from the Society for Public
Health Education, the highest honor it bestows.
Currently, Dr. Goodman is consulting on community-based
public health practices and empowerment evaluation with
the Diabetes Translation and Injury Prevention Branches
at CDC. Also, he is leading an evaluation of
community-based approaches to increasing interest in
cancer clinical trials.
Offered as Keynote Address:
CONCURRENT
SESSION DESCRIPTIONS
Level: Intermediate
Description:
While simple logic models are an adequate way to gain
clarity and initial understanding about a program, sound
program theory can enhance understanding of the underlying
logic of the program by providing a disciplined way to state
and test assumptions about how program activities are
expected to lead to program outcomes. Lecture, exercises,
discussion, and peer-critique will help you to develop and
use program theory as a basis for decisions about
measurement and evaluation methods, to disentangle the
success or failure of a program from the validity of its
conceptual model, and to facilitate the participation and
engagement of diverse stakeholder groups.
You will learn:
- To employ program theory to understand the logic of a program,
- How program theory can improve evaluation accuracy and use,
- To use program theory as part of participatory evaluation practice.
Audience:
Attendees with a basic background in evaluation and
familiarity with logic models and program theory
Stewart I. Donaldson
is Professor and Chair of
Psychology, Director of the Institute of
Organizational and Program Evaluation Research, and
Dean of the School of Behavioral and Organizational
Sciences, Claremont Graduate University. He has
conducted numerous evaluations, developed one of the
largest university-based evaluation training
programs, published many evaluation articles and
chapters, and his recent books include What
Counts as Credible Evidence in Applied Research and
Evaluation Practice? (2008; with C. Christie &
M. Mark), Program Theory-Driven Evaluation
Science: Strategies and Applications (2007), Applied Psychology:
New Frontiers and
Rewarding Careers (2006; with D. Berger & K.
Pezdek), Evaluating Social Programs and Problems:
Visions for the New Millennium (2003; with M.
Scriven), and Social Psychology and Evaluation
(forthcoming; with M. Mark & B. Campbell). He is
co-founder of the Southern California Evaluation
Association and is on the Editorial Boards of the American Journal of Evaluation and
New Directions for Evaluation.
Offered:
Offering 2: What Counts as Credible Evidence in Contemporary
Evaluation Practice: Moving Beyond the Debates
Level: Beginner to Intermediate
This workshop is designed to explore one of the most
fundamental issues facing evaluators today, and the fourth
step in CDC's Framework for Program Evaluation: what counts
as credible evidence in contemporary evaluation practice? Many
thorny debates about what counts as evidence have occurred
in recent years, but few have sorted out the issues in a way
that directly informs contemporary evaluation and
evidence-based practice. Participants will come away from
this workshop with an understanding of the philosophical,
theoretical, methodological, political, and ethical
dimensions of gathering credible evidence and will apply
these dimensions to fundamental evaluation choices we
encounter in applied settings.
Audience: Attendees should have a basic background in evaluation
Stewart
I. Donaldson is Professor and Chair of Psychology,
Director of the Institute of Organizational and Program
Evaluation Research, and Dean of the School of Behavioral
and Organizational Sciences, Claremont Graduate University.
He has conducted numerous evaluations, developed one of the
largest university-based evaluation training programs,
published many evaluation articles and chapters, and his
recent books include What Counts as Credible Evidence in
Applied Research and Evaluation Practice? (2008; with C.
Christie & M. Mark), Program Theory-Driven Evaluation
Science: Strategies and Applications (2007), Applied
Psychology: New Frontiers and Rewarding Careers
(2006; with D. Berger & K. Pezdek), Evaluating Social
Programs and Problems: Visions for the New Millennium
(2003; with M. Scriven), and Social Psychology and Evaluation
(forthcoming; with M. Mark & B. Campbell). He is co-founder
of the Southern California Evaluation Association and is on
the Editorial Boards of the American Journal of
Evaluation and New Directions
for Evaluation.
Offered (Two Rotations of the Same Content - Do not register
for both):
- Monday, June 15, 2:30 – 4:00 PM
- Tuesday, June 16, 2:30 – 4:00 PM
Offering 3: Leading Concepts in Community Health
Evaluation
Level:
Intermediate
In this
session participants will learn different evaluation
methodologies for community assessments and be able to
apply multiple approaches to the evaluation of
community-based health promotion programs. Among other
topics, the course will address:
- Dr. Goodman’s FORECAST method of evaluating complex community programs, including how to develop models for complex programs and then use them to develop markers, measures, and standards (meanings) for measuring program adequacy
- Social ecological perspectives, including assessing and determining the adequacy of interventions from a social ecology perspective
- Community capacity, including expanding our perspectives of community capacity within the construct of health promotion, identifying linkages that interconnect to strengthen community capacity, and developing an evaluation approach that assesses the development of community capacity
- Uncovering “deeper structural meanings” in community responses to evaluation
Audience:
Attendees working with community health programs with
experience conducting evaluations.
Robert M. Goodman
is a Professor and Dean of the School of Health,
Physical Education and Recreation at Indiana University.
Dr. Goodman has written extensively on issues concerning
community health development, community capacity,
community coalitions, evaluation methods, organizational
development, and the institutionalization of health
programs. He has been the principal investigator and
evaluator on projects for CDC, The National Cancer
Institute, The Centers for Substance Abuse Prevention,
The Children’s Defense Fund, and several state health
departments. In 2004, Dr. Goodman received the
Distinguished Fellow Award from the Society for Public
Health Education, the highest honor it bestows.
Currently, Dr. Goodman is consulting on community-based
public health practices and empowerment evaluation with
the Diabetes Translation and Injury Prevention Branches
at CDC. Also, he is leading an evaluation of
community-based approaches to increasing interest in
cancer clinical trials.
Offered:
Offering 4: Systems Level Evaluation of Communities of
Practice
(new!)
Level:
Intermediate
Description:
In an environment of increasing social participation and
transparency, communities of practice are one means to
unite a variety of partners to address common problems
and share resources and learning opportunities. When
asked to design an evaluation of this type of complex
social initiative, evaluators increasingly turn to
system level evaluation. One such tool to frame a system
level evaluation is the use of theory of change. Theory
of change is an approach that links activities,
outcomes, and contexts in a way that maximizes the
attribution of interventions to outcomes. The evaluation
of communities in a public health informatics setting
will illustrate this work. This workshop will use
lecture, exercises and discussion to improve attendees’
ability to understand the application of a systems level
evaluation to communities of practice and how to design an
explicit theory-of-change model, with some attention paid
to operationalizing the model.
You will learn:
- What communities of practice are and how they are used
- How to view communities of practice within a systems-level framework
- How to use theory of change to evaluate communities of practice and their impacts
Audience:
Attendees with a basic understanding of systems
approaches who would like to improve their evaluation of
large-scale social programs
Andrea M. Hegedus is
a Health Research Analyst for Northrop Grumman
Corporation who is currently working at the National
Center for Public Health Informatics at the CDC as the
evaluation lead for the communities of practice program.
She has over 25 years of experience including evaluation
of behavioral healthcare programs and complex social
initiatives. Jan C. Jernigan is a Senior
Evaluator in the Division of Nutrition, Physical
Activity and Obesity at CDC. She has taught program
evaluation courses and conducted program evaluation and
evaluation research for over 15 years. Drs. Hegedus and
Jernigan both received their doctorates from the
Graduate School of Public and International Affairs at
the University of Pittsburgh with a specialty in public
policy research and analysis.
Offered (Two Rotations of the Same Content - Do not
register for both):
- Monday, June 15, 9:25 – 12:45 (20 minute break within)
- Wednesday, June 17, 9:25 – 12:45 (20 minute break within)
Offering 5:
Managing Program Evaluation: Towards Explicating a
Professional Practice
(new!)
Description: Drawing
on
“Managing Program Evaluation: Towards Explicating a
Professional Practice” (New Directions for
Evaluation, 2009), we will examine key questions for
the field of evaluation in relation to managing. We will
identify the stages of managerial skill from novice to
expert and conceptual and practical definitions of
effective managing. We will provide tools and approaches
to assess the organizational arrangement in which you
practice evaluation, as well as your orientation,
knowledge, and expertise in effectively managing
evaluation. The class will also offer an
opportunity to reflect on your personal and professional
experience as you engage in the practice of managing
evaluation studies, evaluators and other staff, and
evaluation units.
Audience:
Those who are managing or aspire to manage evaluation
studies, evaluators and other professional staff or an
evaluation unit.
Don Compton
is
currently co-lead of the evaluation team at CDC in the
Division of Nutrition, Physical Activity and Obesity.
Prior to joining the CDC, he was the Director of
Evaluation at the American Cancer Society for over eight
years where he conducted national evaluations and
consulted with national and state-level staff and
volunteers on program evaluation.
Michael Schooley
is
chief of the Applied Research and Evaluation Branch,
Division for Heart Disease and Stroke Prevention at CDC.
In this capacity, he provides leadership and vision to
applied research and program evaluation activities to
facilitate CDC’s response to public health issues
related to the prevention and control of heart disease
and stroke.
Offered:
Offering 6: Qualitative Interviewing: Asking the Right
Questions in the Right Way
Level:
Intermediate
Description:
Preparing a proper interview guide is only a first step
to proper Q & A in a qualitative data collection
episode. This session outlines key sections to include
in an interview guide and offers suggestions for how to
conduct a qualitative interview and/or focus group. The
face-to-face interaction in this case is critical. The
interviewer must balance attention to the questions
designed for the interaction and the emergent topics in
the interview. Core skills that focus attention on the
audience for the study, the topics of the project,
important lines of questioning and goals for ensuring
quality interaction in this relationship improve the
quality of data collection.
Audience: Researchers in any discipline with a basic knowledge of
qualitative analysis who are interested in using
conversational techniques in the form of interviews or
focus groups
Raymond C. Maietta
is president of ResearchTalk Inc., a qualitative
research consulting company in Bohemia, New York. A
sociologist from the State University of New York at
Stony Brook, Ray's interests in the art of qualitative
research methods motivated him to start ResearchTalk in
1996. ResearchTalk Inc. provides advice on all phases of
qualitative analysis to university, government,
not-for-profit and corporate researchers. Work with
ResearchTalk clients using qualitative software informs
recent publications: Systematic Procedures of Inquiry
and Computer Data Analysis Software for Qualitative
Research (with John Creswell in Handbook of
Research Design and Social Measurement, Sage 2002)
and State of the Art: Integrating Software with
Qualitative Analysis (in Leslie Curry, Renee Shield
and Terrie Wetle, (Eds.) Applying Qualitative and
Mixed Methods in Aging and Public Health Research,
American Public Health Association and the
Gerontological Society of America 2006). More than 12
years of consultation with qualitative researchers
informs the methods book Dr. Maietta is writing. Sort
and Sift, Think and Shift will be completed in 2009.
Offered (Two Rotations of the Same Content - Do not register
for both):
- Monday, June 15, 9:25 – 12:45 (20 minute break within)
- Tuesday, June 16, 9:25 – 12:45 (20 minute break within)
Offering 7:
Analyzing Qualitative
Data: Using Qualitative Data Analysis Software to
Balance the Expected and Unexpected
Level:
Intermediate
Description:
Ray
Maietta's ‘Sort and Sift, Think and Shift’ qualitative
method informs the content of this session. Qualitative
evaluations are often defined by pre-determined goals
and questions to pursue in analysis. However, issues
emerge in initial document reviews that both confirm and
challenge these goals. This session addresses ways to
use qualitative data analysis software, specifically
ATLAS.ti and MAXQDA, to facilitate serendipitous
discovery and to balance new ideas with pre-existing
questions for the study. We will discuss ways to ensure
that software is always a tool that supports your
exploration rather than a driver that defines where you
go.
Audience: Researchers in any discipline who have collected
qualitative data in the form of interviews, focus groups
or fieldnotes
Raymond C. Maietta
is president of ResearchTalk Inc., a qualitative
research consulting company in Bohemia, New York. A
sociologist from the State University of New York at
Stony Brook, Ray's interests in the art of qualitative
research methods motivated him to start ResearchTalk in
1996. ResearchTalk Inc. provides advice on all phases of
qualitative analysis to university, government,
not-for-profit and corporate researchers. Work with
ResearchTalk clients using qualitative software informs
recent publications: Systematic Procedures of Inquiry
and Computer Data Analysis Software for Qualitative
Research (with John Creswell in Handbook of
Research Design and Social Measurement, Sage 2002)
and State of the Art: Integrating Software with
Qualitative Analysis (in Leslie Curry, Renee Shield
and Terrie Wetle, (Eds.) Applying Qualitative and
Mixed Methods in Aging and Public Health Research,
American Public Health Association and the
Gerontological Society of America 2006). More than 12
years of consultation with qualitative researchers
informs the methods book Dr. Maietta is writing. Sort
and Sift, Think and Shift will be completed in 2009.
Offered (Two Rotations of the Same Content - Do not
register for both):
- Monday, June 15, 2:30 – 4:00 PM
- Tuesday, June 16, 2:30 – 4:00 PM
Offering 8: Evaluating Complex Systems: The CDC-INFO
Framework
Level:
Intermediate
Description:
Traditional approaches to evaluation are adaptations of
the “experimental model” oriented to assessing an
“intervention” or other change event for its impacts on
specified outcomes. Many public programs provide
services to the public that do not easily fit this
“intervention” model, yet will benefit from evaluative
feedback. CDC-INFO, CDC’s consolidated service for
responding to telephone and e-mail information contacts
from the professional and lay public, is an example,
providing continuous real-time and quick response
information. This session uses the ongoing CDC-INFO
evaluation to explicate a framework for designing and
implementing monitoring and evaluation services that
provide continuous monitoring and evaluation feedback
that addresses quality improvement, outcome assessment,
and surveillance and planning information needs in an
integrated data collection, analysis and reporting
system. The presentation addresses design and
implementation phases including 1) mapping the
information and decision system, 2) supporting quality
assessment and improvement, 3) assessing outcomes, 4)
using service data for surveillance and planning, 5)
building collaboration, and 6) reporting for diverse
audiences.
You will learn:
- To map information needs in complex service systems providing continuous services;
- To develop integrated data collection and analysis that supports continuous quality improvement and performance monitoring with respect to processes, outcomes, and impacts (e.g., information utility and application);
- To work with multiple stakeholders and report information relevant to different categories of user.
Audience:
Attendees with any combination of basic background in
evaluation, experience in program management and
decision making, and interest in using evaluative
information for quality improvement, outcome assessment,
or planning.
Elizabeth Harris
is
Vice President of EMT Associates, Inc., and Project
Director for the independent evaluation of CDC-INFO.
She has
over 15 years of experience in evaluation with federal,
state and local agencies. She has served as principal
investigator on national evaluation studies, developed
data collection instruments and has been responsible for
data collection, analysis and report writing. Dr. Harris
oversees a staff of 25 research associates and research
assistants.
Offered:
Offering 9: Focus Group Research: Understanding,
Designing and Implementing
Level: All
Description: As a qualitative research method, focus groups are an
important tool to help researchers understand the
motivators and determinants of a given behavior. This
course provides a practical introduction to focus group
research. At the completion of this course, participants
will be able to 1) identify and discuss critical
decisions in designing a focus group study, 2)
understand how research or study questions influence
decisions regarding segmentation, recruitment, and
screening; and 3) identify and discuss different types
of analytical strategies and focus group reports.
Audience:
Attendees
working in any context
who are new to focus group facilitation
Michelle Revels and
Bonnie Bates are technical directors at ORC Macro
specializing in focus group research and program
evaluation. Ms. Revels attended Hampshire College in
Amherst, Massachusetts and the Hubert H. Humphrey
Institute of Public Affairs at the University of
Minnesota. Ms. Bates, also a trained and experienced
focus group moderator and meeting facilitator, received
her bachelor’s and master’s degrees in criminal justice
from the University of Maryland.
Offered (Two Rotations of the Same Content - Do not
register for both):
- Monday, June 15, 9:25 – 12:45 (20 minute break within)
- Wednesday, June 17, 9:25 – 12:45 (20 minute break within)
Offering 10: Theory-Driven Evaluation for Assessing and
Improving Program Planning, Implementation, and
Effectiveness
Level:
Intermediate
Description:
Learn
the theory-driven approach for assessing and improving
program planning, implementation and effectiveness. You
will explore the conceptual framework of program theory
and its structure, which facilitates precise
communication between evaluators and stakeholders
regarding evaluation needs and approaches to address
those needs. Mini-lectures, group exercises and case
studies will be used to illustrate the use of program
theory and theory-driven evaluation for program
planning, initial implementation, mature implementation
and outcomes. In addition, the participants will learn
principles and strategies for using the theory-driven
approach to deal with the following cutting edge issues:
how to go beyond traditional methodology for designing a
real world evaluation, how to achieve both internal and
external validity in an evaluation, and how to use
program theory for guiding the application of mixed
methods in an evaluation.
Audience:
Attendees with a basic background in logic models and/or
program theory.
Huey Chen
is
a senior evaluation scientist at the CDC. He was a
professor at the School of Public Health at the
University of Alabama at Birmingham until January 2008.
Dr. Chen has contributed to the development of
evaluation theory and methodology, especially in the
areas of program theory, theory-driven evaluations, and
evaluation taxonomy. His book Theory-Driven
Evaluations has been recognized as one of the
landmarks in program evaluation and his newest text, Practical Program Evaluation, offers an accessible
approach to evaluation for those working in any context.
In 1993 he received the AEA Paul F. Lazarsfeld Award for
contributions to Evaluation Theory and in 1998 he
received the CDC Senior Biomedical Research Service
Award.
Offered (Two Rotations of the Same Content - Do not register
for both):
- Tuesday, June 16, 9:25 – 12:45 (20 minute break within)
- Wednesday, June 17, 9:25 – 12:45 (20 minute break within)
Offering 11: How Policy Is Made and How Evaluators Can
Affect It (new!)
Level: All
Description: This
session will explain how public policy processes work
and how evaluators can get their evaluations noticed and
used by policy makers. It will guide evaluators through
the maze of policy processes, such as legislation,
regulations, administrative procedures, budgets,
re-organizations, and goal setting. It will also explain
how policy makers can be identified and reached and
describe their preferred writing and other communication
styles, what they value in terms of policy advice, and when
they are open to it. The session will show how to
effectively present evaluation findings and
recommendations to decision makers of congressional and
executive branches of Federal, state, and local
governments as well as boards and administrative offices
of foundations and not-for-profit public service
organizations. It will also explain how to influence
policies that affect the evaluation profession.
Audience: This
session is designed for evaluators of any experience
level who wish to have impact on public policies through
their evaluations.
George F. Grob is President of the Center for
Public Program Evaluation. He is an independent
consultant focusing on evaluation of public programs and
related fields of policy development and performance
management. He currently serves as consultant to the AEA
Evaluation Policy Task Force. Other recent projects
include work for the Robert Wood Johnson Foundation,
National Aquarium at Baltimore, NOAA’s National Marine
Fisheries Administration, and the Agency for Families
and Children. Before establishing this consultancy, he
served as Executive Director of the Citizens’ Health
Care Working Group and Deputy Inspector General for
Evaluation and Inspections of the U.S. Department of
Health and Human Services. He has testified before
Congress more than two dozen times.
Offered (Two Rotations of the Same Content - Do not
register for both):
- Monday, June 15, 2:30 – 4:00 PM
- Tuesday, June 16, 2:30 – 4:00 PM
Offering 12:
Minding Your Mind (new!)
Level:
All
Description:
Evaluators and other professionals often overlook the
importance of understanding and developing the one tool
that they use in all facets of their work—their own
brains. They seldom pay attention to how their brains
work day by day; how memories are created, organized,
and accessed; under what physical circumstances creative
inspirations arise; how different ideas get connected to
one another; how food choices, sleep, exercise,
unstructured time, personal life problems, and other
factors affect what they think about; and how easily or
stressfully they handle various thinking chores. This
course addresses these factors and explains how they can
be controlled and corralled to make thinking easier,
less stressful, and more productive. This is not about
“gimmicks” but is based on the physiological aspects
of brain functioning, research on food and the mind, and
the working habits and experiences of successful people
involved in intellectual activities.
Audience:
This session is designed for evaluators and other
professionals of any experience level who feel
overwhelmed by having to keep too many balls in the air
at one time, are exasperated by deadlines and
paperwork, and wish to become more efficient and
effective and derive greater enjoyment and success in
their intellectual pursuits.
George F. Grob is President of the Center for
Public Program Evaluation. He is an independent
consultant focusing on evaluation of public programs and
related fields of policy development and performance
management. He currently serves as consultant to the AEA
Evaluation Policy Task Force. Other recent projects
include work for the Robert Wood Johnson Foundation,
National Aquarium at Baltimore, and several Federal
agencies. Previously, he served as Deputy Inspector
General for Evaluation and Inspections of the U.S.
Department of Health and Human Services where he and his
staff systematically looked for ways to enjoy and
succeed in their work. He has lectured on this and other
psychological aspects of evaluation and other
professional work.
Offered:
Level:
Intermediate
Description:
This
skill-building session addresses the centrality of
culture in evaluation. It is organized in two segments.
The opening segment addresses the relevance of culture
to all stages of the evaluation process, to the
fundamental validity of our work as evaluators, and to
ethical standards and guidelines of our profession.
Presenters will use an FAQ format to raise questions and
address common misconceptions that marginalize
discussions of culture within the evaluation community
(e.g., Is “culture” really just a code-word for “race”?
How does culture apply to me as a white evaluator
working within predominantly white populations? What is
the “value added” of culture in evaluation? Why should I
care?) The second segment extends cultural relevance to
present strategies for building cultural competence
through experience, education and self-awareness.
Theoretical frameworks that situate culture in
evaluation (e.g., Frierson, Hood & Hughes, 2002; Hall &
Hood, 2005; Kirkhart, 2005) are presented as advance
organizers for practice and application purposes.
Presenters use case scenarios and participants’ own
examples to integrate workshop content with
participants’ field experience, interests, and concerns.
They rely on various theoretical frameworks to guide the
two segments in tangible and practical ways. Additional
resources are provided to extend and reinforce
participant learning.
Audience: Attendees working in any context with a
basic background in evaluation.
Karen E. Kirkhart
holds a Ph.D. in Social Work and Psychology from The
University of Michigan and is currently Professor,
School of Social Work, College of Human Ecology,
Syracuse University. Rodney K. Hopson
has undergraduate and graduate degrees in English
Literature, Educational Evaluation, and Linguistics from
the University of Virginia, and he is Associate
Professor and Chair, Department of Educational
Foundations and Leadership and faculty member in the
Center for Interpretive and Qualitative Research at
Duquesne University. Karen and Rodney have served in
positions of leadership within the American Evaluation
Association, and both are actively involved in education
and scholarship on culture, diversity, and social
justice in evaluation. Rodney serves as Project Director
for the American Evaluation Association/Duquesne
University Graduate Education Diversity Internship
Program. Karen is a member of the AEA Diversity
Committee task force charged with developing a public
interest statement on the subject of cultural competence
and evaluation.
Offered (Two Rotations of the Same Content - Do not
register for both):
- Monday, June 15, 9:25 – 12:45 (20 minute break within)
- Tuesday, June 16, 9:25 – 12:45 (20 minute break within)
Level: Beginner
Description: Communicating
evaluation processes and results is one of the most
critical aspects of evaluation practice. This is
especially true when learning and evaluation use are
expected goals of the evaluation. Yet, evaluators
continually experience frustration with hours spent
writing reports that are seldom read or shared with
others. This session will explore all aspects of
creating and working with the final evaluation report to
maximize its relevance and credibility for primary
evaluation audiences. In this session, you will explore:
final report basics including its major components,
identification of audience characteristics to inform
report structure and content, various structures for
increasing the report’s relevance to primary audiences,
combining qualitative and quantitative data, alternative
formats for making content easier to assimilate, and
strategies for conducting working sessions that take the
final report into action planning with evaluation
stakeholders.
Audience:
Attendees working in any context with a basic
understanding of evaluation processes as well as the
relevant stakeholders to an evaluation.
Rosalie T. Torres is
president of Torres Consulting Group, a research,
evaluation and management consulting firm specializing
in the feedback-based development of programs and
organizations since 1992. She earned her Ph.D. in
research and evaluation from the University of Illinois
and formerly was the Director of Research, Evaluation,
and Organizational Learning at the Developmental Studies
Center, an educational non-profit. Over the past 28
years, she has conducted more than 80 evaluations in
education and nonprofit organizations, and has worked
extensively with U.S. schools and districts in both
internal and external evaluator roles. Drawing on this
evaluation practice, she has authored/co-authored
numerous books and articles including, Evaluation
Strategies for Communicating and Reporting, 2nd
edition (Torres, Preskill, & Piontek, 2005), and
Evaluative Inquiry for Learning in Organizations
(Preskill & Torres, 1999). She has served on the AEA
Board, taught graduate research and evaluation courses,
and is a sought after workshop facilitator on various
topics related to evaluation practice.
Offered (Two Rotations of the Same Content - Do not register
for both):
- Monday, June 15, 9:25 – 12:45 (20 minute break within)
- Tuesday, June 16, 9:25 – 12:45 (20 minute break within)
Offering 15: Using Social Network Analysis in Program
Evaluation
Level:
Intermediate
Description:
An important aspect of many evaluations is understanding
the development of key relationships, and the flow of
resources between groups or individuals. Social network
methods quantitatively assess the structure of a group
or community, and the relationships between individuals
or organizations. Within the evaluation community, there
is growing recognition of how social network analysis
techniques can add important robustness to a
comprehensive evaluation design. This session will
provide an overview of the foundations of social network
methods and essential theories and core characteristics
and components. Through lecture, evaluation examples,
discussion and in-session exercises, we will review the
development of appropriate evaluation questions and
approaches where social network methods may be used.
You will learn:
- The basics of social network theory and measurement
- How to combine social network methods with other data collection and analytical methods in program evaluation
- Measurement issues and pitfalls in social network measurement
Audience:
Attendees with an intermediate background in evaluation
who would like to develop an understanding of social
network methods.
Julia Melkers
is Associate Professor of Public Policy at the Georgia
Institute of Technology. She has an extensive
background in performance measurement and program
evaluation of publicly funded programs and teaches
graduate-level program evaluation and survey research
methods. Most recently, she has been engaged in a number
of multi-year evaluations using social network analysis
in the assessment of collaborative patterns and
knowledge exchange within the context of academic
science. In her own research, she is principal
investigator of a large national study of the social and
professional networks of women in science. She is
co-editor of the book R&D Evaluation Methods and
is currently working on a book reviewing R&D performance
measurement approaches in other countries.
Offered (Two Rotations of the Same Content - Do not
register for both):
- Monday, June 15, 2:30 – 4:00 PM
- Tuesday, June 16, 2:30 – 4:00 PM
Offering 16:
Taking it Global: Tips for International Evaluation
(new!)
Level: Beginner
Description:
This
session offers practical considerations for those interested
in or preparing to work in evaluation overseas. It is
organized in three segments. The opening segment provides an
overview of the organizational context for international
evaluations, highlighting key entities focused on
strengthening, sharing, and supporting evaluation theory and
practice around the world. The next segment focuses on the
stakeholder environment, with the role of donors, host
governments, local evaluation associations, and civil
society contrasted with that of major players in U.S.-based
evaluations. The discussion on how to reconcile the
different needs and expectations of these stakeholders sets
the stage for the final segment, which presents a case study
illustrating common challenges encountered in the field.
Participants will work in small groups to consider such
issues as credibility, evaluation capacity building,
cultural competence, and local ethical standards. Throughout
the session, participants' own evaluation experience in the
domestic arena will serve as a catalyst for discussion and
concrete guidance.
Audience:
Attendees with a basic background in evaluation who are
currently working in, or considering working in,
international contexts
Donna Podems
is a senior evaluation facilitator for Macro
International. She has practical evaluation experience
in the United States as well as numerous countries in
Africa, Asia, and Latin America. An evaluation
generalist, she has experience in evaluating a wide
range of projects, including gender, women’s
empowerment, HIV/AIDS, and youth interventions, among
others. Her doctorate in interdisciplinary studies,
focused on Program Evaluation and Organizational
Development, is from the Union Institute and University
and her Master’s degree in Public Administration and BA
in Political Science are from The American University.
Offered:
Offering 17: Gender Issues in Global Evaluation
Level:
Intermediate
Description: A brief review of the various gender and
feminist approaches and their dissimilar histories,
contexts, and critiques will set the stage for participant
discussion and practical application. Participants will
engage in facilitator-led discussions regarding the
different, appropriate, and practical applications of gender
and feminist evaluation, and how these approaches have the
potential to enhance various evaluation contexts throughout
the ‘developed’ and ‘developing’ world. Participants will
demystify feminist and gender evaluation by examining case
studies from Africa and Asia in various fields including
HIV/AIDS, human rights, public private partnerships, and
environment. Through small group work participants will
apply elements of both approaches resulting in an advanced
understanding of how feminist and gender approaches can
enhance evaluation and the evaluator in any context.
Audience:
Attendees with a basic understanding of
evaluation and working in any context, although examples are
drawn from international development.
Donna
Podems is a senior evaluation facilitator for Macro
International. She has conducted trainings, developed M&E
systems, and implemented evaluations for government agencies
and grassroots and nongovernmental organizations. She has
practical evaluation experience in the United States,
Botswana, Cameroon, Ghana, Kenya, Madagascar, Namibia,
Republic of South Africa, Somaliland, Swaziland, Uganda,
Zimbabwe, Bosnia, Guatemala, Panama, Peru, Belize,
Nicaragua, India, Indonesia, Pakistan, Sri Lanka, Thailand
and Vietnam. An evaluation generalist, she has experience in
researching, evaluating and/or developing M&E systems for a
wide range of projects. Topics have included gender, women’s
empowerment, HIV/AIDS, youth interventions, natural resource
management, education, capacity building, human rights,
public private partnerships, and community needs. Her
doctorate in interdisciplinary studies, focused on Program
Evaluation and Organizational Development, is from the Union
Institute and University and her Master’s degree in Public
Administration and BA in Political Science are from The
American University.
Offered:
Offering 18:
Making the Most Out of Multi-site Evaluations: How
Involving People Can Make Sense
(new!)
Level:
Intermediate
Description: Large-scale evaluations that involve a
number of different sites are increasingly common. While
utilization-focused evaluation can inform our practice
with a small group of primary intended users, what about
the many other potential users, including the
individuals at multiple project sites who take an active
part in multi-site evaluations? Based on a research
study of four large-scale evaluations, this session will
teach participants what we have learned about how to
take advantage of involvement to increase the potential
impact of multi-site studies. We will begin by reviewing
the principles of utilization-focused evaluation, then
examine the ways that evaluation involvement, use, and
influence can differ when people are engaged across
multiple sites. Participants are encouraged to come with
a specific multi-site evaluation in mind.
Audience: People with a solid background in
evaluation and experience working in the context of
multi-site evaluations.
Jean
A. King and
Frances Lawrenz are professors in the College of
Education and Human Development at the University of
Minnesota. King teaches in the Evaluation Studies
Program in Educational Policy and Administration and
serves as the Director of Graduate Studies for the
University-wide Program Evaluation minor. Lawrenz is the
University’s Associate Vice-President for Research and
teaches in Educational Psychology. Both have received
the American Evaluation Association’s Myrdal Award for
Evaluation Practice--King for her work with
participatory evaluation and Lawrenz for her work
evaluating science, technology, engineering, and
mathematics (STEM) programs. They will be assisted by
graduate students who were part of the research team on
which the workshop is based.
Offered:
Level:
All
Description:
This workshop is designed to teach participants the
Essential Competencies for Program Evaluators, a set of
knowledge, skills, and attitudes in six categories. The
session will begin with the analysis of program
evaluation vignettes representing diverse areas of
practice to show both the common competencies across
settings and those unique to specific contents or
contexts. Following a brief history of how the
competencies were developed, the session will then
examine the competencies in all six categories:
professional practice, systematic inquiry, situational
analysis, project management, reflective practice, and
interpersonal skills. This discussion, which builds on
the continuum of interpersonal evaluation practice, will
ground participants in the competencies’ content and
allow people to ask questions as they think about their
own evaluation practice. After a short break,
participants will develop concept maps to explore how
the competencies make sense in their roles or content
areas. Comparative discussion will further illuminate
the competencies, and then participants will complete a
self-assessment tool and discuss how to set priorities
and action steps for professional development. Most of
the session will consist of interactive exercises with
just enough lecture to frame the discussion.
Audience:
All evaluators, and those thinking about entering the
field of evaluation, working in any context
Jean A. King
is a Professor in the Department of Educational Policy
and Administration at the University of Minnesota where
she serves as the Director of Graduate Studies and
Coordinator of the Evaluation Studies Program. She holds
an M.S. and Ph.D. from Cornell University and prior to
her graduate study taught middle school English for a
number of years. In 1995, her work using participatory
evaluation methods resulted in the Myrdal Award for
Evaluation Practice from the American Evaluation
Association, and in 1999, she was awarded the
Association’s Robert Ingle Award for Extraordinary
Service. Professor King received the University of
Minnesota, College of Education and Human Development’s
Beck Award for Outstanding Instruction in 1999, the
College’s 2002 Distinguished Teaching Award, and the
2005 Community Service Award. She is the author of
numerous articles and chapters and, with Laurie Stevahn,
continues writing a book on interactive evaluation
practice.
Offered (Two Rotations of the Same Content - Do not
register for both):
- Monday, June 15, 9:25 – 12:45 (20 minute break within)
- Tuesday, June 16, 9:25 – 12:45 (20 minute break within)
Offering 20: Every Picture Tells a Story: Flow Charts, Logic Models, LogFrames, Etc. - What They Are and When to Use Them
Level: Advanced Beginner
Description:
A
host of visual aids are in use in planning and
evaluation. This session will introduce you to some of
the most popular ones—with an emphasis on flow charts,
logic models, project network diagrams, and logframes.
We’ll review the content and format of each one and then
compare and contrast their uses so that you can better
match specific tools to specific program needs. We’ll
review simple ways to construct each type of tool and
work through some simple cases both as illustrations and
as a way for you to practice the principles presented in
the session.
Audience: Assumes prior familiarity with evaluation terminology and
some experience in constructing logic models.
Thomas Chapel
is a Senior Evaluation Scientist in the Office of
Workforce and Career Development, at the Centers for
Disease Control and Prevention. He serves as a central
resource on strategic planning and program evaluation
for CDC programs and their partners. Before joining CDC,
Mr. Chapel was Vice-President of the Atlanta office of
Macro International where he directed and managed
projects in program evaluation, strategic planning, and
evaluation design for public and non-profit
organizations. He is a frequent presenter at national
meetings, a frequent contributor to edited volumes and
monographs on evaluation, and has facilitated or served
on numerous expert panels on public health and
evaluation topics. Mr. Chapel is active nationally and
locally in the American Evaluation Association (AEA),
currently as past-chair of the Membership Committee and
convener of AEA’s Local Affiliate Collaborative. Mr.
Chapel holds a BA degree from Johns Hopkins University
and MA in public policy and MBA degrees from the
University of Minnesota.
Offered (Two Rotations of the Same Content - Do not
register for both):
- Monday, June 15, 9:25 – 12:45 (20 minute break within)
- Wednesday, June 17, 9:25 – 12:45 (20 minute break within)
Offering
21: Evaluating Organizational Collaboration
Level:
Intermediate
Description:
“Collaboration” is a misunderstood, under-empiricized and
un-operationalized construct. Program and organizational
stakeholders looking to do and be collaborative struggle to
identify, practice and evaluate it with efficacy. This
workshop aims to increase participants’ capacity to
quantitatively and qualitatively examine the development of
inter-organizational partnerships. Together, we will review,
discuss, and try out specific tools for data collection,
analysis and reporting, and we will identify ways to use the
evaluation process to inform and improve collaborative
ventures. You will practice using assessment techniques that
are currently being used in the evaluation of PreK-16
educational reform initiatives and other grant-sponsored
endeavors, including the Safe Schools/Healthy Students
initiative.
Audience:
Attendees with a basic understanding of organizational
change theory/systems theory and familiarity with mixed
methodological designs
Rebecca
Gajda
has been a facilitator of various workshops and courses for
adult learners for more than 10 years. She was a top-10
workshop presenter at Evaluation 2007, lauded for her
hands-on, accessible, and immediately useful content. As
Director of Research and Evaluation for a large-scale,
grant-funded school improvement initiative, she is currently
working collaboratively with organizational stakeholders to
examine the nature, characteristics and effects of
collaborative school structures on student and teacher
empowerment and performance. Dr. Gajda received her Ph.D.
from Colorado State University and is currently an assistant
professor at the University of Massachusetts Amherst.
Offered (Two Rotations of the
Same Content - Do not register for both):
- Tuesday, June 16, 9:25 – 12:45 (20 minute break within)
- Wednesday, June 17, 9:25 – 12:45 (20 minute break within)
Offering 22: Enhanced Group Facilitation: Techniques and
Process
Level:
All
Description:
This popular and well-received workshop will familiarize
participants with a variety of group facilitation
techniques as well as the management of the facilitation
process. You will learn how to choose a facilitation
technique based on goals and objectives, anticipated
outcome, type and number of participants, and logistics.
Two to three facilitation techniques for generating
ideas and focusing thoughts, including item writing and
nominal group technique, will be explored in greater
detail. We will also cover variations on these
techniques and how they may be used for your
facilitation purposes. Finally, participants will learn
more about the different roles and responsibilities they
may have in group facilitation (there are more than you
think!), and how these roles intersect with the tasks
inherent in planning and managing a group facilitation
experience. Job aids and reference lists will be
provided.
Audience: Attendees working in any context who work with, or expect
to be working with, client groups of any size.
Jennifer Dewey is
a Technical Director with the research and evaluation
professional services firm of Macro International Inc.
Jennifer oversees a program of ongoing training and
technical assistance to local evaluation teams for the
national evaluation of a federally-funded children’s
mental health services program under the Substance Abuse
and Mental Health Services Administration (SAMHSA).
Prior positions include Director of Internal Evaluation
at Learning Point Associates, Senior Consultant at
Andersen, and post-doctoral scholar at the Center for
Prevention Research at the University of Kentucky.
Jennifer holds a doctorate in Applied Experimental
Psychology with a specialization in program evaluation.
Her knowledge and skills encompass project management,
proposal development, methodological and statistical
design, qualitative and quantitative analysis, needs
assessment, survey development, telephone and in-person
interviews, and group facilitation. Jennifer is a 2007
and 2008 Malcolm Baldrige National Quality Award
Examiner. She has published in the Journal of Primary
Prevention, American Journal of Evaluation,
Advances in Developing Human Resources, and has
made over 40 professional conference presentations.
Offered (Two Rotations of the Same Content - Do not
register for both):
-
Monday,
June 15, 9:25 – 12:45 (20 minute break within)
-
Wednesday,
June 17, 9:25 – 12:45 (20 minute break within)
Offering 23: Using the Guiding Principles to Improve
Your Evaluation Practice
Level:
Beginner to Intermediate
Description:
The
Guiding Principles
for Evaluators
focus
on five areas of evaluation practice: systematic
inquiry, competence, integrity and honesty, respect for
people, and responsibilities for general and public
welfare.
The Principles guide the professional practice of
evaluators, and inform evaluation clients and the
general public about the principles they can expect to
be upheld by professional practitioners.
This session will share ways to use the Principles
to
improve the ways in which you plan for and conduct
evaluations and work with stakeholders and clients.
After a brief presentation that introduces the Principles, participants will work together in small
groups to discuss the Principles as they relate
to a topical case study. Through case explorations,
lecture and small and large group discussions, you will
gain a deeper understanding of the practical
applications of the
Principles. The
workshop will also introduce resources—print, web-based
and collegial networks—that evaluators can consult to
handle professional dilemmas that arise in their
practice. You will receive copies of the workshop
presentation, the case study, the
Principles
in
full and abbreviated brochure format, and a list of
resources for more information and consultation.
Audience:
Evaluators
and commissioners of evaluation working in any context
Leslie Goodyear
is Program Officer in the Division of Research on
Learning at the National Science Foundation. In addition
to grant making, she coordinates evaluation for the
division’s programs. As a program evaluator and
researcher, Dr. Goodyear has worked with programs
focused on HIV/AIDS Prevention; Out-of-School Time;
Youth Engagement and Youth Media; Educational Research;
and STEM Education. Leslie is a past Chair of the AEA
Ethics Committee and an AEA Board member. She earned her
M.S. and Ph.D. in Human Service Studies, with focus on
Program Evaluation, from Cornell University.
Offered (Two Rotations of the Same Content - Do not
register for both):
-
Monday, June 15, 2:30 – 4:00 PM
-
Tuesday, June 16, 2:30 – 4:00 PM
Offering 24:
Collaborative Evaluations: A Step-by-Step Model for the
Evaluator
Level:
Beginner
Description:
Do you
want to engage and succeed in collaborative evaluations?
Using clear and simple language, Dr. Liliana Rodríguez
will outline key concepts and effective tools to help
master the mechanics of collaboration in the evaluation
environment. Specifically, you will explore how to apply
the Model for Collaborative Evaluations (MCE) to
real-life evaluations, with a special emphasis on those
factors that facilitate and inhibit stakeholders'
participation. The presenter shares her experience and
insights regarding this subject in a precise and easy to
understand fashion, so that participants can use the
information learned from this workshop immediately.
Using discussion, demonstration, hands-on exercises and
small group work, participants will apply the learned
techniques to specific situations. In addition,
participants are encouraged to bring actual evaluation
examples, present scenarios and/or specific problem
areas for discussion. You will learn to:
-
Understand the factors that influence the success of
collaboration in evaluations,
-
Capitalize on others' strengths to encourage
feedback, clarify interpretations, and resolve
misunderstandings,
-
Select the methods and tools to facilitate
collaborative evaluations and build collaborative
relationships.
Audience:
Evaluators
working in any context.
Liliana Rodríguez-Campos is an assistant professor
in the educational measurement and research department
at the University of South Florida’s College of
Education. She received the American Evaluation
Association's Marcia Guttentag Promising New Evaluator
Award and served as Program Chair for AEA's
Collaborative, Participatory, and Empowerment Evaluation
Topical Interest Group. She is the author of
Collaborative
Evaluations (Llumina Press), a highly comprehensive
and easy-to-follow book for those evaluators who want to
engage and succeed in collaborative evaluations.
Offered (Two Rotations of the Same Content - Do not
register for both):
-
Tuesday,
June 16, 9:25 – 12:45 (20 minute break within)
-
Wednesday,
June 17, 9:25 – 12:45 (20 minute break within)
Offering 25:
Evaluaciones Colaborativas: Un Modelo Paso a Paso para
el Evaluador
Level:
Beginner
Description:
¿Desea usted tener éxito en las
evaluaciones colaborativas? Usando un lenguaje
claro y simple, la Dra. Liliana Rodríguez
presenta las herramientas más eficaces para
entender los factores que afectan la
colaboración en el ambiente de evaluación.
Específicamente, usted explorará cómo aplicar el
Modelo para Evaluaciones Colaborativas (MEC) en
casos específicos de la vida real, con un
énfasis especial en los factores que facilitan u
obstruyen la participación de los interesados.
La presentadora comparte su experiencia y
conocimiento con respecto a este tema de una
forma precisa y fácil de entender, de modo que
los participantes puedan utilizar inmediatamente
la información aprendida en este taller.
Mediante una discusión altamente interactiva,
demostraciones, ejercicios prácticos y grupos
pequeños, los participantes aplicarán las
técnicas aprendidas a situaciones específicas.
Además, los participantes están invitados a
traer ejemplos de sus propias evaluaciones,
presentar escenarios y/o áreas problemáticas
específicas para ser discutidos en este taller.
Usted aprenderá a:
-
Entender los factores que influencian el
éxito de las evaluaciones colaborativas,
-
Tomar en cuenta las fortalezas de otros para
fomentar la retroalimentación, clarificar
las interpretaciones y solucionar los
malentendidos,
-
Seleccionar los métodos y/o
las herramientas apropiadas para realizar
evaluaciones colaborativas mediante el
desarrollo de relaciones colaborativas.
Audience:
This offering will be conducted in Spanish and is intended
for Spanish-fluent evaluators working in any context.
Liliana Rodríguez-Campos
dicta la Cátedra de Evaluación en el Departamento de
Medición e Investigación en la Universidad de South
Florida.
Ella recibió el Reconocimiento Marcia Guttentag de la
Asociación Americana de Evaluación otorgado a un Nuevo
Evaluador Prometedor y es la Directora del Programa del
Tópico de Interés Grupal de Colaboración, Participación
y Empowerment en dicha Asociación. Ella es la autora de
Evaluaciones Colaborativas (Llumina Press), un
libro muy completo y útil para aquellos evaluadores que
desean tener éxito en las evaluaciones colaborativas.
Offered:
Offering 26: Case Study Methods for Evaluators
Level: Beginner
Description: Case Study Methods allow evaluators
to approach program assessment from a powerful and
flexible design palette. While often heavily steeped in
the use of qualitative methods, case studies also may
include the use of quantitative data. The approach is
particularly rich for tinting and shading the effects of
programs as well as investigating important program
questions in depth.
This interactive,
three-hour session will provide participants with an
overview and examples of case study research methods as
they apply to evaluation settings. Through the
development and expansion of sample case studies, by the
end of the session participants will:
-
Comprehend
the role of case study methods within the context of
other evaluation approaches
-
Be able to describe the elements of case study
research and identify the major strengths and
weaknesses of case study methods;
-
Understand the sequential, operational guidelines
for implementing case study research
-
Review techniques for establishing the validity and
reliability of case study data
-
Strengthen data gathering and analysis skills
through use of techniques common to case study
research
Audience: Attendees working in any context
Rita O’Sullivan
is Associate Professor of Evaluation and Assessment at
the University of North Carolina at Chapel Hill where
she teaches graduate courses in Educational Program
Evaluation, Case Study Methods, Research Design,
Measurement, and Statistics. She is also Executive
Director of Evaluation, Assessment, and Policy
Connections (EvAP), a unit she founded within the UNC
School of Education that conducts local, state, and
national evaluations. Dr. O’Sullivan has specialized in
developing collaborative evaluation techniques that
enhance evaluation capacity and utilization among
educators and public service providers. She is senior
author of Programs for At-Risk Students: A Guide to
Evaluation (Corwin Press, 1993) and wrote Practicing Evaluation: A Collaborative Approach
(Sage) in 2004.
Offered (Two Rotations of the Same Content - Do not
register for both):
-
Monday, June 15, 9:25 – 12:45 (20 minute break
within)
-
Tuesday, June 16, 9:25 – 12:45 (20 minute break
within)
Offering 27:
Translation of Evaluation Capacity Building Strategies
to Community Based Organizations Conducting Health
Promotion Activities: Tools, Tips and Lessons Learned
Level:
Intermediate
Description:
National public health disparities often require local,
community-based approaches to influence behaviors and
policies. While community-based organizations serve as
catalysts for prevention and health promotion
activities, many do not consistently practice program
evaluation due to the challenges of limited time, staff
and measurement skills. This session will describe how
to 1) develop an audience-targeted evaluation capacity
building plan, 2) choose appropriate tools and media
for evaluation capacity building, 3) engage in
bi-directional learning on how to couple community
engagement and evaluation approaches, and 4) navigate
challenges in evaluation capacity building partnerships.
Audience:
Attendees working with community-based public health
initiatives to conduct evaluations, who offer evaluation
capacity building or technical assistance in program
assessment.
Tabia Henry Akintobi
is a Research Assistant Professor at the Morehouse
School of Medicine and Director of Evaluation for its
Prevention Research Center. She evaluates The Atlanta
Clinical and Translational Service Institute Community
Engagement and Research Program designed, in part, to
engage academicians and the community in collaborative
research. She provides evaluation or capacity building
for programs addressing infrastructure development,
health outcomes and service delivery in the areas of
maternal and child health, substance abuse, mental
health, HIV/AIDS and teen pregnancy. She led assessment
of the Pfizer Foundation Southern HIV/AIDS Prevention
Initiative and the Southeastern Collaborative Center of
Excellence for the Elimination of Health Disparities.
She is Chairperson for The
National Prevention Research Center
Evaluation Advisory Committee.
Offered (Two Rotations of the Same Content - Do not
register for both):
-
Monday, June 15, 2:30 – 4:00 PM
-
Tuesday, June 16, 2:30 – 4:00 PM
Offering 28: Evaluation 2.0: Measuring Connected
Communications and Online Identity
Level: Beginner
Description:
Are you working with programs that have
started a blog, used RSS feeds, updated their website,
or employed social media to spread the word about their
services, increase name recognition, change behaviors,
or build community? Have you been asked to evaluate
connected communications or online identity but aren’t
sure where to start? This workshop will examine what to
measure, how to measure it, and what tools are available
to assist you. We’ll identify the ways in which new
media has changed the way we communicate and the
implications for evaluation. We’ll provide multiple
examples of how to source data, determine baselines,
track change over time, and apply both qualitative and
quantitative techniques to measure outcomes. Finally,
we’ll examine the range of tools available and provide
specific examples of how to choose those that best match
your data collection, analysis, and reporting needs. You
will leave with multiple take-home resources including
examples, checklists, worksheets, and tool comparisons.
Audience: Attendees working in any context
Susan Kistler
is the Executive Director of the American Evaluation
Association and owner of iMeasure Media. She has taught
statistics, research methods, and evaluation at the
university level and is an experienced trainer for
local, regional, and national audiences. She has built
upon her traditional evaluation and qualitative and
quantitative analysis background, as well as an ongoing
commitment to employing methods that are both useful and
feasible, to develop and implement measures for emerging
technologies that allow for comparisons over time and
across groups.
LaMarcus Bolton is
the Technology Director for the American
Evaluation Association
and the Information Officer for the National Association
of African Americans in Human Resources of Greater St.
Louis. He brings to the session his background as a
researcher and years of applied organizational and
consulting experience.
Offered:
Offering 29: Conducting and Using Success Stories for
Capacity Building
Level:
Intermediate
Description:
To build program capacity, a program's “success” must be
communicated at many levels. In addition, the impacts of
prevention programs may not be demonstrable for several
years; therefore, communicating success during the various
life stages of a program is important for long-term
sustainability. The presenters will use their experience
with 13 national oral health grantees to demonstrate how to
use success stories to build both program and evaluation
capacity. This practical, hands-on session will enable
attendees to begin writing their own success stories. It is
an expanded version of last year's session of the same
title, with more time to practice applications for use in
your own work. Attendees will receive the newly developed
workbook, Impact and Value: Telling Your Program's Story,
for use during the class and to take home for reference.
Audience:
Attendees working in any context with a working
knowledge of both evaluation and qualitative inquiry
René Lavinghouze
is with the Division of Oral Health at CDC where she
leads a multi-site, cluster evaluation designed to
assess infrastructure development. René has over 15 years
of experience with CDC and in the private sector. She
is Co-chair of AEA’s TIG for Cluster, Multi-site/level
evaluations and serves on the communications team for
the local evaluation affiliate, AaEA. Ann Price is president
of Community Evaluation Solutions, Inc., and has over 20
years of experience in both treatment and
prevention. She has conducted evaluations in many areas
including intimate partner violence, mental health,
substance abuse, tobacco prevention and oral
health. Prior to CES, Dr. Price was a Senior Data
Analyst at ORC Macro on a multi-site national child
mental health evaluation.
Offered (Two Rotations of the Same Content - Do not
register for both):
-
Monday,
June 15, 9:25 – 12:45 (20 minute break within)
-
Wednesday,
June 17, 9:25 – 12:45 (20 minute break within)
Offering 30: A 4-Stage Model of Participatory Evaluation
for Evaluating Community-based Programs
(new!)
Level: Beginner
Description:
Evaluation of community-based programs often requires
the identification and measurement of activities that
are implemented by internal and external actors using
diverse strategies to affect short- and long-term
outcomes. Participatory evaluation (PE) is a
useful methodology for documenting the outcomes of such
diffuse work. Community based organizations (CBOs) can
expect PE to provide added benefits, such as increasing
stakeholder knowledge and commitment, engaging
wide-ranging perspectives, and improving prospects for
sustainability. In PE, the evaluators are a team
made up of facilitators and project leaders that engages
in an iterative process of problem definition and
strategy development. PE allows leaders to extract and
utilize knowledge created in the course of the work.
This workshop will cover a four-step approach to PE in a
community context: (1) Front End, where the PE
team clarifies the scope and focus of the evaluation and
its role in context; (2) Formative Evaluation,
where the PE team defines baselines and sets goals; (3)
Process Evaluation, where the PE team collects
regular updates of progress and sharpens understanding
of the means used to achieve goals; and (4) Summative
Evaluation, where the PE team reflects on progress
to date and sets priorities for future work.
Audience:
Evaluators working with community-based programs
Janet Rechtman
is a Senior Fellow at the Fanning Institute at the
University of Georgia. With a background in strategic
planning, organizational development, marketing, and
leadership development, Janet has nearly thirty years of
experience in providing technical assistance, training
and facilitation to nonprofits and community-based
collaborations. Details of her presentation will also be
available in the May 2009 issue of the Foundation Review.
Courtney Tobin is a Public Service Associate at the
Fanning Institute, where she focuses on community
economic development initiatives. She designs and
executes public participatory processes for communities,
diverse stakeholder groups and public agency rule-making
initiatives. Ms. Tobin provides technical assistance and
creates educational opportunities for neighborhoods and
communities on reuse opportunities, including brownfield
redevelopment and related economic development
initiatives.
Offered:
Offering 31: Evaluation Techniques: Drawing
from Organizational Development
(new!)
Level: Beginner
Description:
The
field of Organizational Development can supply the
evaluation practitioner with theories, contexts,
frameworks, and skills that enhance and improve the
focus of evaluative inquiry. This presentation will
provide participants with a basic introduction to
Organizational Development and how this field can inform
program evaluation while demonstrating organizational
effectiveness and improvement. The class will touch on a
variety of qualitative OD techniques such as narrative
inquiry, negotiation, motivation, and effective
communication. A case study will be used both as an
illustration and as an opportunity to apply the content
of the class.
Audience:
Evaluators working in any context.
Catherine D. Kostilnik is the President of the
Center for Community Studies, Inc. She has over 18 years of
experience designing and conducting a variety of program
evaluations for national, state, and local community
based organizations. She is a Certified Health Education
Specialist and a Certified Family Life Educator. She is
a member of the American Evaluation Association, the
Atlanta Area Evaluation Association, and the Southern
Georgia Evaluation Association. In addition to program
evaluation, she writes grants for a variety of
organizations focused upon ATOD prevention, school
health, rural health, academic achievement, and youth
development.
Offered:
Offering 32: Improving Survey Quality: Assessing and
Increasing Survey Reliability and Validity
Level:
Intermediate
Description:
Develop higher-quality surveys! This workshop is designed to
teach participants how to improve survey quality, thus
increasing the utility of and confidence in the data they
collect. We will look at surveys designed to elicit factual
information as well as ones that ask about subjective and
abstract concepts.
Through the use of hands-on activities, mini-lectures, and
demonstrations, participants will understand what is meant by
reliability and validity with respect to surveys and will
learn ways to improve each during the survey design phase
for both types of surveys. Next, using a case example and
SPSS we will explore ways to use pilot test responses to
assess the reliability of subjective / abstract survey
constructs by conducting confirmatory factor analysis and
calculating Cronbach’s alpha. We will work together to
understand what our findings tell us as well as what they
don’t tell us, and consider other ways to assess survey
quality. Last, we will explore the types of validity
associated with surveys and ways to assess each, again using
our case example. You will receive a workbook and SPSS
screenshots to help you remember how to perform many of the
computations covered. Participants will be surprised by how
easy it is to improve survey quality through a few
easy-to-implement steps!
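For orientation only (an illustrative formula, not part of the
workshop materials), Cronbach's alpha for a scale of k items is
commonly computed as

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

where \sigma^{2}_{Y_i} is the variance of item i and
\sigma^{2}_{X} is the variance of the total (summed) score;
values closer to 1 indicate greater internal consistency. SPSS
reports this statistic as part of its reliability analysis
output.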
Audience:
Attendees working in any context with a basic background in
survey development and an understanding of factor analysis.
Amy A.
Germuth
earned
her PhD in Evaluation, Measurement and Psychology from UNC
Chapel Hill and a certificate in survey methodology via a
joint program through UNC-CH, UMD, Westat and RTI. She is a
founding partner of Compass Consulting Group, LLC,
a private evaluation
consulting firm that conducts evaluations at the local,
state, and national levels. As part of Compass she has
evaluated numerous initiatives, including health prevention
and outreach programs, Math Science Partnerships, K-12
science outreach programs, and workforce development
initiatives, and has worked with a variety of organizations
including the US Education Department, Department of
Health and Human Services, Westat, Georgia Tech., Virginia
Tech., University of North Carolina, the New York State
Education Department, multiple NC Childhood Education
Partnerships, and Hawaii’s Kamehameha Schools. As part of
her evaluation work she has developed and guided large-scale
survey initiatives. Dr. Germuth teaches evaluation and
instrument development as part of Duke University’s
Certificate Program in Non-Profit Management.
Offered (Two Rotations of the Same Content - Do not
register for both):
-
Tuesday, June 16, 9:25 – 12:45 (20 minute break
within)
-
Wednesday,
June 17, 9:25 – 12:45 (20 minute break within)
Offering 33: Technology and Tools for Evaluation
(new!)
Level: Beginner
Description:
The field of evaluation has
many opportunities for using a variety of tools and
technologies. From PDA units to scannable forms, this
session will focus on some of the technologies used in
evaluation. Participants will learn approaches for
planning, implementing, and managing Information
Technologies (IT) both in the field and back at home
base. Participants will be encouraged to share their
experience and resources regarding use of technology for
evaluation.
Audience:
Those working in any context
Carlyle Bruce, President of Wellsys Corporation, is a
clinical-community psychologist who has consulted with,
developed, and evaluated programs for over 20 years. His creativity,
energy, and technical expertise have been utilized by
his clients to help them achieve their goals. In
addition, his broad systems-oriented perspectives have
been distinguishing qualities in his work. Dr. Bruce’s
consulting and evaluation experience includes numerous
local, state, and federally funded programs and agencies
and programs funded by private foundations. This scope
includes the fields of education, health, human
services, youth development, and justice. As a social
researcher, he has been especially involved with
evaluating system and community change initiatives,
particularly those providing social services. Across
these areas, Dr. Bruce has continued to focus on
applications of technology to research and
program/organization development. He has given
presentations and workshops on topics of program
evaluation, substance abuse prevention, and
diversity. Dr. Bruce has broad training and experience
in clinical child and family psychology, as well as
community-organizational psychology.
Offered:
Offering 34: Project Management Fundamentals – The Key
to Implementing Evaluation Projects
(new!)
Level: Intermediate
Description:
This highly experiential session is focused on how to implement an evaluation effort through a structured
project management methodology. The workshop will focus
on initiating, planning, executing, and monitoring and
controlling an evaluation project. Participants will
gain an understanding of how the project management
process can be used effectively to add value to
evaluation projects, and will also gain an understanding
of how to generate a project plan and manage an
evaluation project. At the conclusion of the session,
participants will be able to:
-
Describe the documents that define an evaluation
“project plan”
-
Write a project charter (mission statement) for an
evaluation.
-
Develop a project scope statement for an evaluation
project.
-
Interpret a graphic picture of a project via a
network diagram and Gantt chart.
-
Generate a project schedule and describe the project
critical path (see the brief illustration below).
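A brief hypothetical illustration of the critical path idea
(the activities and durations here are invented for this
description, not taken from the workshop): suppose an
evaluation project runs Design (3 weeks), then, in parallel,
Data Collection (5 weeks) and IRB Approval (4 weeks), and
finally Reporting (2 weeks), which cannot start until both
parallel activities finish. The two paths through the network
take 3 + 5 + 2 = 10 weeks and 3 + 4 + 2 = 9 weeks, so the
critical path runs through Design, Data Collection, and
Reporting (10 weeks); a delay in any of those three activities
delays the whole project, while IRB Approval carries one week
of slack.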
Audience:
Participants with responsibilities for evaluation
implementation, as well as individuals with oversight
responsibilities for evaluation projects. Participants
already participating in program evaluation will benefit
most from the session.
Richard
H. Deane
has over 30 years of experience in the application of
structured project management techniques in both the
private and public health sectors. He has taught at Purdue
University, Georgia Tech, and Georgia State University,
winning numerous teaching awards. Richard has been a
highly acclaimed instructor at the CDC University for
many years, and his project management consulting
assignments over the past 25 years have included work
with various agencies within the CDC, state public
health departments, private health organizations,
federal agencies, and numerous private sector
corporations.
Offered:
Offering 35: Accountability for Health Promotion
Programs-Practical Strategies and Lessons Learned
Level:
Beginner
Description:
Over the past decade or more, policy makers and others
have called for greater accountability in the public
sector. With ever-decreasing resources for public
health, decision-makers want specific types of
information to assess the “value” of continued
investment in disease prevention and health promotion
programs. For accountability purposes, how do we assess
whether our public health programs are effective and
result in progress toward program goals? This session
will describe several strategies to assess program
accountability including performance measurement, expert
review and appraisal, and questions-oriented approaches.
Emphasis will be given to the application and constructive
use of these strategies for program improvement purposes.
Practical examples demonstrating these approaches will be
shared, and potential real-world challenges and lessons
learned will be discussed.
Audience:
Those working in public health contexts.
Amy DeGroff
is an evaluator working in the Division of Cancer
Prevention and Control with the Centers for Disease
Control and Prevention. Ms. DeGroff conducts qualitative
research and evaluation studies and currently oversees a
large scale multiple case study of a colorectal cancer
screening program.
Michael Schooley
is chief of the Applied Research and Evaluation Branch,
Division for Heart Disease and Stroke Prevention with
the Centers for Disease Control and Prevention. He has
contributed to the development and implementation of
numerous evaluation, applied research and surveillance
projects, publications and presentations.
Offered (Two Rotations of the Same Content - Do not register
for both):
-
Monday, June 15, 2:30 – 4:00 PM
-
Tuesday, June 16, 2:30 – 4:00 PM
Offering 36: Evaluability Assessments:
Achieving Better Evaluations
(new!)
Level:
Beginner
Description:
Though
rigorous evaluation is a valuable method, in practice it
also proves costly and time-consuming. Further, rigorous
evaluation is not an appropriate fit for every
initiative. Evaluability assessments (EAs) offer a
cost-effective technique to help guide evaluation
choices. Developed by Joseph Wholey and colleagues three
decades ago (1979), EA received significant attention at
the time, but its use subsequently declined (Rog 1985;
Trevisan 2007). Yet EAs help answer critical
questions evaluators continue to face in practice: Is a
program or policy ready for rigorous evaluation? What
are viable options for evaluating a particular
initiative? EAs involve clarifying program goals and
design, finding out stakeholders’ views on important
issues, and exploring the reality of a given initiative.
In short, EAs are a valuable and important tool to have
in an evaluator’s toolbox. This workshop will provide
participants an understanding of EAs and how they can be
applied in their own practice.
Audience:
Those working in any context.
Nicola Dawkins received her PhD in Sociology and her Master
of Public Health from Emory University. She serves as a Senior
Technical Director at Macro International Inc. where she
designs and implements numerous research and evaluation
studies. Among these is the Coordinating Center for the
Early Assessment of Programs and Policies to Prevent
Childhood Obesity, a project that employed a cluster
evaluability assessment methodology to examine multiple
initiatives. Nicola also oversees other individual
evaluability assessment and evaluation projects.
Offered (Two Rotations of the Same Content - Do not
register for both):
-
Monday,
June 15, 9:25 – 12:45 (20 minute break within)
-
Tuesday, June 16, 9:25 – 12:45 (20 minute break
within)
Offering 38:
Process Evaluation
(new!)
Level:
Beginner
Description:
Process evaluation examines
activities and operations of a program or intervention.
Process evaluations typically ask questions such as:
-
Who participated in
the program? How fully did they participate?
-
Was the program
implemented as intended?
-
What external
factors facilitated or inhibited program
implementation?
-
Was greater
participation in certain program components
associated with intended outcomes?
Process
evaluation is useful for a variety of purposes,
including program improvement, accountability,
identifying critical program components, and
interpretation of outcome evaluation findings. In this
session we will define process evaluation and key
related constructs (e.g., reach, fidelity, context,
dose). We will discuss challenges and provide practical
suggestions for identifying appropriate process
evaluation questions and measuring key constructs. We
will also discuss use of process evaluation findings at
different program stages, including how process and
outcome evaluation fit together.
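As a simple illustration (these are common conventions in the
process evaluation literature, not necessarily the definitions
used in this session), two of the constructs above are often
quantified as

\text{reach} = \frac{\text{number of intended participants who actually took part}}{\text{number of people in the priority population}}
\qquad
\text{dose received} = \frac{\text{number of sessions a participant attended}}{\text{number of sessions offered}}

so that both are proportions between 0 and 1 that can be
tracked over the life of a program.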
Audience:
Attendees working in any context who are new to process
evaluation
Michelle Crozier Kegler
is an Associate Professor in the Department of
Behavioral Sciences and Health Education at the Rollins
School of Public Health of Emory University. She is also
Deputy Director of the Emory Prevention Research Center
and Co-Director of its evaluation core. She has directed
numerous evaluation projects primarily within the
context of community partnerships and community-based
chronic disease prevention. Dr. Kegler teaches
evaluation at the Rollins School of Public Health and
regularly conducts workshops on evaluation to a variety
of audiences. Sally Honeycutt joined the Emory
Prevention Research Center in February 2007 as an
Evaluation Specialist. Before coming to Emory, Sally was
a member of the Surveillance and Evaluation Team for the
Steps to a HealthierUS Program at CDC. She has served as
a Maternal and Child Health Educator with the Peace
Corps and has experience coordinating health promotion
programs both domestically and internationally.
Offered:
Offering 39: An Introduction to Economic Evaluation
Level:
Intermediate
Description:
Economic evaluation refers to applied analytic methods
used to identify, measure, value, and compare the costs
and consequences of programs and interventions. This
course provides an overview of these methods, including
cost analysis, cost-effectiveness analysis (CEA), and
cost-benefit analysis (CBA) with an opportunity for
hands-on application of each. You will leave
understanding when and how to apply each method
appropriately in a range of evaluation contexts.
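For reference only (an illustrative formula, not a substitute
for the course content), the central comparison in
cost-effectiveness analysis is usually summarized as an
incremental cost-effectiveness ratio,

\text{ICER} = \frac{C_{1} - C_{0}}{E_{1} - E_{0}}

where C_1 and E_1 are the costs and health effects of the
program being evaluated and C_0 and E_0 are those of the
comparator. Cost-benefit analysis instead values both costs
and consequences in monetary terms and reports a net benefit
(benefits minus costs) or a benefit-cost ratio.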
Audience:
Attendees with a basic background in statistical
methods and understanding of evaluation.
Phaedra S. Corso
is currently an Associate Professor in the Department of
Health Policy and Management at the University of
Georgia College of Public Health. Previously, she served
as the senior health economist in the National Center
for Injury Prevention and Control at CDC where she
worked for over a decade in the areas of economic
evaluation and decision analysis, publishing
numerous articles on the cost-effectiveness of
prevention interventions and co-editing a book on
prevention effectiveness methods in public health. She
holds a Master’s degree in public finance from the
University of Georgia and a Ph.D. in health policy and
decision sciences from Harvard University.
Offered (Two Rotations of the Same Content - Do not
register for both):
-
Monday,
June 15, 9:25 – 12:45 (20 minute break within)
-
Tuesday,
June 16, 9:25 – 12:45 (20 minute break within)
Offering 40: Utilization Focused Evaluation
Level:
Beginner
Description: Evaluations should be
useful, practical, accurate and ethical.
Utilization-focused Evaluation is a process that
meets these expectations and promotes use of
evaluation from beginning to end. With a focus
on carefully targeting and implementing
evaluations for increased utility, this approach
encourages situational responsiveness,
adaptability and creativity. This training is
aimed at building capacity to think
strategically about evaluation and increase
commitment to conducting high quality and useful
evaluations. Utilization-focused evaluation
focuses on the intended users of the evaluation
in the context of situational responsiveness
with the goal of methodological appropriateness.
An appropriate match between users and methods
should result in an evaluation that is useful,
practical, accurate, and ethical, the
characteristics of high quality evaluations
according to the profession's standards. With an
overall goal of teaching you the process of
Utilization-focused Evaluation, the session will
combine lectures with concrete examples and
interactive case analyses.
Audience:
Attendees working in any context.
Michael Quinn Patton
is an independent consultant and professor at
the Union Institute. He is an internationally known
expert on Utilization-focused Evaluation, and this
workshop is based on the newly completed fourth
edition of his best-selling evaluation text, Utilization Focused Evaluation: The New Century
Text (SAGE).
Offered:
Offering 41: Performance Management and the Obama Administration
(new!)
Level: Intermediate
Description: This policy and best practices
workshop will outline the new directions and
enhancements the Obama Administration is bringing to
performance planning, measurement and management. With the
Recovery and Stimulus legislation, the new position of
Chief Performance Officer, and the accelerated emphasis on
transparency and accountability at all levels of federal
and state government, this area has become central to all
government programs. Jon
Desenberg, The Policy Director of The Performance
Institute in Washington DC, will outline existing best
practices and upcoming policy changes in the Government
Performance and Results Act, the new Performance
Evaluation legislation moving through Congress, and the
latest results from the Institute’s recent survey of
more than 500 private and public sector organizations.
As an interactive workshop, participants will be
encouraged to discuss and contribute their current and
future strategies on strategic planning, outcome focused
performance indicators and change management.
Audience:
Any government employee,
contractor, grantee or other interested attendee working
on performance management, planning, or reporting.
Jon Desenberg is the
Policy Director for The Performance Management and Human
Capital Management Divisions at The Performance
Institute. He is responsible for developing, structuring
and implementing creative solutions for clients’
organizational and workforce planning needs. Jon brings
more than 19 years of public sector experience to his
current position, specifically in the fields of
performance management, strategic planning, and
knowledge management. As Managing Director, he
successfully led the United States General Services
Administration’s (GSA) Performance Management program,
which ultimately resulted in aligned goals and measures
cascading to all 13,000 employees.
Offered:
Tuesday,
June 16, 2:30 – 4:00 PM
Offering 42: Getting To Outcomes® (GTO)
Level:
All
Description:
Getting To Outcomes®:
Methods and Tools for
Planning, Evaluation, and Accountability
(GTO®)
was developed to help practitioners plan, implement, and
evaluate their programs to achieve results. GTO won the
AEA 2008 Outstanding Publication Award. GTO has been
downloaded more than 60,000 times from the RAND website
(available free at
http://www.rand.org/pubs/technical_reports/TR101/).
GTO is based on answering 10 accountability questions.
By answering the questions well, program developers
increase their probability of achieving outcomes and
demonstrate their accountability to stakeholders.
Answering the 10 questions involves a comprehensive
approach to results-based accountability: needs and
resource assessment; identifying goals, target
populations, desired outcomes (objectives); science and
best practices; fit; capacity; planning;
implementation/process evaluation; outcome evaluation;
continuous quality improvement; and sustainability.
CDC-funded research shows that use of GTO improved
individual prevention capacity and program performance.
GTO has been customized for several areas of public
health including: substance abuse prevention, underage
drinking prevention, positive youth development,
teen pregnancy prevention, and emergency preparedness.
It is currently being developed for Systems of Care in
Children’s Mental Health. This workshop will focus on
learning the basics of the GTO approach and present a
case study of how GTO is being used in a CDC-sponsored,
multi-site capacity building project intended to promote
the use of science-based approaches in teen pregnancy
prevention.
Audience:
Attendees working in public health who are new to the
GTO approach.
Abraham Wandersman is a Professor of Psychology
at the University of South Carolina-Columbia. He
received his doctorate from Cornell University. Dr.
Wandersman performs research and program evaluation on
citizen participation in community organizations and
coalitions and on interagency collaboration. He is a
co-author of Prevention Plus III and a co-editor of Empowerment Evaluation: Knowledge and Tools for Self
Assessment and Accountability and Empowerment
Evaluation: Principles in Practice.
Offered (Two Rotations of the Same Content - Do not
register for both):
-
Monday, June 15, 2:30 – 4:00 PM
-
Tuesday, June 16, 2:30 – 4:00 PM
Offering 43: Writing Questions that Elicit What You’re
Looking For
Level: All
Description: There is a lot to know when
constructing a productive survey. Participants will be
reminded of what they already know and will learn many
non-obvious but critical considerations as they draft a
survey relating to their field of interest. The
principles underlying valid, useful, and reliable
open-ended and fixed-choice questions will be discussed
as well as additional considerations involved in
aggregating questions on a survey.
We will explore (a)
when to use open- versus close-ended questions,
including issues of feasibility of analysis; (b) the
range of question types, such as yes/no, multiple
choice, scales, ranking, short answer, and factors that
influence selection; (c) question ordering and its
impact on response; and (d) careful wording to avoid
common question development pitfalls.
Audience:
Attendees working in any context.
Angelika Pohl is founder and President
of Atlanta-based Better Testing & Evaluations. Dr. Pohl
has extensive professional experience in education and
in all aspects of K-12 educational testing and
evaluation. After a career of college and
graduate-level teaching and research, she was
Project Director
at National Evaluation
Systems
(now a part of Pearson),
the leading developer of customized teacher
certification tests.
Before launching her consulting firm, in which she works
mostly with K-12 teachers, public school systems, and
related educational organizations, she worked for the
Georgia Dept. of Education where she was responsible for
several testing programs.
Her work with Better Testing & Evaluations focuses on
producing information that is useful to all
constituencies and developing data-driven blueprints for
strengthening programs.
Offered
(Two Rotations of the Same Content - Do not register for
both):
-
Monday, June 15, 2:30 – 4:00 PM
-
Tuesday, June 16, 2:30 – 4:00 PM
Offering 44: Logic Models as a Platform for Program
Evaluation Planning, Implementation, and Use of Findings
Level: All
Description: Practitioners use logic models to
describe important components of a program; make visible
a theory of change; and link activities to intended
outcomes. For the purposes of evaluation practice, a
well-constructed logic model provides a program-specific
foundation for identifying evaluation questions;
prioritizing data needs; and translating findings into
recommendations for ongoing program improvement. Aimed
directly at improving the utility of logic models and
quality of evaluation practice in your setting, the
workshop addresses two questions:
(1) What are the hallmarks of a well-constructed,
scientifically-sound and useful logic model?
(2) How do we maximize the use of logic models for program
evaluation planning, implementation and use of
findings?
Workshop Objectives:
-
Demystify and define the logic model as a starting
point for everyday evaluation practice
-
Identify the hallmarks of a well-constructed,
scientifically-sound logic model
-
Demonstrate the use of logic models to identify and
prioritize evaluation questions and data needs
-
Examine the use of logic models to identify
opportunities/options for demonstrating
accountability for scarce resources
-
Demonstrate use of a logic model to guide
preparation of findings/recommendations aimed at
ongoing program improvement
Audience:
Attendees working in any context who are new to logic
modeling.
Goldie MacDonald
is a Health Scientist in the Coordinating Office for
Global Health (COGH) at the Centers for Disease Control
and Prevention (CDC). She provides technical expertise
and practice wisdom in the areas of program evaluation
planning, implementation, and the use of findings to
inform ongoing program improvement. Much of her work
focuses on identifying appropriate strategies to
document program implementation and progress toward
intended outcomes in an effort to demonstrate
accountability for resources in public health contexts.
She offers practical guidance on participatory
approaches to program evaluation, resource-efficient
strategies for data collection, and the value of logic
models as a necessary platform for program evaluation.
She is lead author of “Introduction to Program
Evaluation for Comprehensive Tobacco Control Programs.”
For their work on this resource, the authors received
the Alva and Gunnar Myrdal Award for Government from AEA
in November 2002.
Offered (Two Rotations of the Same Content - Do not
register for both):
-
Tuesday, June 16, 9:25 – 12:45 (20 minute break
within)
-
Wednesday,
June 17, 9:25 – 12:45 (20 minute break within)
Offering 45: Methods for Analyzing Change Over Time
Level: Intermediate
Description:
We will
focus on a variety of methods for analyzing change in
outcomes over time, including the traditional fixed
effects methods of pre/posttest ANCOVA and Repeated
Measures ANOVA; the slopes-as-outcomes individual
regression analysis approach; and multilevel modeling
and random coefficients models. The purpose of the
workshop is to explore the conceptual underpinnings of
these different approaches to assessing change, and to
compare the kinds of statistical information one is able
to glean from these types of analyses when addressing
questions of change. We will discuss what it means to
measure change, how each method attacks that task, and
how to determine which measure to use in a given
situation since each method has its strengths and
weaknesses with respect to its conceptual approach,
parameter estimation, precision of estimates and
handling missing data. Due to the nature of the topic,
the majority of the workshop will involve presentations.
Conceptual information, statistical output and graphs
will be shared in a give-and-take format, where
participants bring their own questions and concerns
about analyzing change over time. Demonstration of how
to set up longitudinal data for the different analytical
methods will be included as well as interpreting
statistical output.
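As a notational sketch only (standard two-level growth-model
notation, not material taken from the workshop), a
random-coefficients model of change for repeated measures t
nested within individuals i can be written as

y_{ti} = \pi_{0i} + \pi_{1i}\,\text{time}_{ti} + e_{ti}
\pi_{0i} = \beta_{00} + r_{0i}, \qquad \pi_{1i} = \beta_{10} + r_{1i}

where \beta_{10} is the average rate of change and the
variance of r_{1i} describes how much individual trajectories
differ, a quantity the fixed-effects ANOVA approaches do not
estimate directly.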
Audience:
Attendees
with a good understanding of General Linear Models
(including the ANOVA family and MRC) and some basic
experience with longitudinal analysis.
Katherine McKnight,
has taught statistics workshops at AEA's annual
conferences for many years and began with the Institute
in 2007. An outstanding facilitator, she is known for
making difficult concepts accessible and interesting.
Offered:
Offering 46: Public Health Evaluation: Getting to the Right
Questions
Level: Advanced beginner to Intermediate
Description:
In 1999, the Centers for Disease Control and Prevention
published its Framework for Program Evaluation to provide
public health professionals with a common evaluation frame
of reference. Beyond this basic
framework, however, there are nuances and complexities to
planning and implementing evaluations in public health
settings. An important skill of the public health evaluator
is to work with stakeholders who may have an enormous range
of potential evaluation questions to arrive at a focused set
of evaluation questions that are most likely to provide
useful, actionable results for public health. The workshop
will employ real public health examples, a role play
demonstration and small group discussion to examine how to
explore and then narrow the scope of possible evaluation
questions to "get to the right questions" for a variety of
evaluation contexts. The class will focus on strategies to
work with stakeholders to identify what types of evidence
will have credibility while taking into consideration issues
such as politics, accountability, and rotating personnel.
Audience: Attendees working in any context and
familiar with
evaluation planning and implementation frameworks, such as
the CDC Evaluation Framework.
Mary V. Davis
is Director of Evaluation Services at the North Carolina
Institute for Public Health and Adjunct Faculty in the
University of North Carolina School of Public Health where
she teaches several advanced evaluation courses.
Diane Dunet
is Team Lead of the Evaluation and Program Effectiveness
Team, Applied Research and Evaluation Branch in CDC's
Division of Heart Disease and Stroke Prevention, where she
conducts and supervises public health evaluations.
Offered (Two
Rotations of the Same Content - Do not register for both):