Evaluation 2008



Session Title: Evaluation Planning Incorporating Context (EPIC): A Model and Case Examples of Practice
Panel Session 546 to be held in Room 113 in the Convention Center on Friday, Nov 7, 9:15 AM to 10:45 AM
Sponsored by the Theories of Evaluation TIG
Chair(s):
Debra Holden,  RTI International,  debra@rti.org
Abstract: This session will present an overview of a Sage book (release date: 10/08), entitled A Practical Guide to Program Evaluation Planning. The book provides a step-by-step process to guide evaluators as they begin developing a program evaluation in a range of settings. Our five-step conceptual framework, called Evaluation Planning Incorporating Context (EPIC), includes 1) assessing context (e.g., stating the purpose of the evaluation); 2) gathering reconnaissance (e.g., determining the evaluation uses); 3) engaging stakeholders (e.g., ensuring stakeholders' buy-in); 4) describing the program (e.g., stating the theoretical underpinnings of the program); and 5) focusing the evaluation (e.g., assessing feasibility of proposed measures). We will first provide an overview of our model, including an explanation of each step and the issues to anticipate at each, and then invited presenters, who are authors of chapters in the book, will present case examples of their planning process for evaluations conducted in differing contexts.
Evaluation Planning Incorporating Context (EPIC) Model Overview
Debra Holden,  RTI International,  debra@rti.org
Marc A Zimmerman,  University of Michigan,  marcz@umich.edu
In this presentation (and book), we introduce an evaluation planning process model we call the Evaluation Planning Incorporating Context (EPIC) model. We define evaluation planning as the initial or background processes that go into the final design and implementation of a program evaluation. We describe five general steps in this process (assess context, gather reconnaissance, engage stakeholders, describe the program, and focus the evaluation) that provide a guideline for evaluators as they develop an evaluation plan. The EPIC model provides a heuristic for evaluation planning rather than a specified set of steps that are required for all evaluations. Some parts of the model may be more or less applicable depending on such issues as the type of evaluation, the setting of the evaluation, the outcomes of interest, and the sponsor's interests. Thus, the EPIC model can be used as a kind of instruction guide to prepare for a program evaluation.
Planning for an Education Evaluation
Julie Marshall,  University of Colorado Denver,  julie.marshall@uchsc.edu
In this presentation, we describe contextual factors identified in planning the evaluation of a school-based nutrition curriculum in a rural, low-income community. The curriculum delivery included fun, hands-on food preparation, cooperative learning, and activities that were tied to content standards in math, literacy, and science. The evaluation focused on understanding long-term curriculum effectiveness and the factors that influence curriculum adoption and delivery. Evaluation planning considered local and state stakeholders, the stress surrounding high-stakes testing, and the burden placed on schools by health-related activities. Including teachers as part of the evaluation team was critical for informing the evaluation questions, for understanding the context within which teachers and students operate that may modify curriculum delivery and impact, and for developing evaluation tools. Strategies for planning meaningful evaluations in a school setting and lessons learned will be highlighted.
Evaluation Planning for a Service Agency Program
Mari Millery,  Columbia University,  mm994@columbia.edu
This case study represents an evaluation planning process in a service program context. The EPIC model is applied to describe the planning for a study at the Leukemia & Lymphoma Society's Information Resource Center (LLS IRC), which responds to nearly 80,000 telephone inquiries annually from cancer patients and their family members. The study focused on a patient navigation intervention consisting of follow-up calls LLS IRC made to its clients. The author, who was contracted as an external evaluator, worked closely with LLS IRC managers and staff throughout the 3-month planning process, which resulted in a fairly rigorous study design. The presentation will describe each planning step and discuss lessons learned. Issues of particular importance to service programs will be highlighted, including the complexity of the context, the importance of stakeholders, process vs. outcome evaluation, and the use of tools, conceptual frameworks, and evaluation research concepts while working with service program stakeholders.
Evaluation Planning for a Community-Based Program
Thomas Reischl,  University of Michigan,  reischl@umich.edu
Susan P Franzen,  University of Michigan,  sfranzen@umich.edu
Planning an evaluation of a new community-based program required successful partnerships with the project's coordinating agency and other community-based organizations. In the presentation, we describe these relationships and the role the evaluators played in developing and evaluating the new program. In developing the plan, we adopted a "responsive predisposition" (Stake, 2004) to focus on key issues, problems, and concerns experienced by the program's stakeholders. We also adopted principles that engaged key stakeholders in the evaluation planning and in the implementation of the evaluation study. We describe the development of the evaluation plan for a new telephone information and referral service focused on serving African American families and reducing infant mortality among African American mothers. Finally, we discuss the utility of using an Evaluation Planning Matrix to help focus the evaluation. The evaluation plan focused on process evaluation goals and attempted an outcome evaluation study using baseline data from a previous study.
Stake, R. E. (2004). Standards-based and responsive evaluation. Thousand Oaks, CA: Sage.
Planning for a Media Evaluation: Case Example of the National Truth Campaign
W Douglas Evans,  The George Washington University,  sphwde@gwumc.edu
Media evaluation is an overarching subject area that includes the study of marketing campaigns intended to promote or change consumer behavior, as well as assessments of educational and entertainment media and of the effects of news media on public discourse and policy. In this presentation, we describe evaluation planning strategies and research methods for health communication and marketing campaigns designed to affect consumer health behavior. Media evaluation is distinct from other forms of program evaluation: it focuses on media effects on healthy behaviors or avoidance of unhealthy behaviors, as opposed to broad evaluation strategies that cut across multiple venues and approaches. Media evaluations measure four key process and outcome dimensions of campaign effectiveness: 1) exposure and recall, 2) message reactions and receptivity, 3) behavioral determinants, and 4) behavioral outcomes. After describing media evaluation methods, we describe the evaluation of the truth campaign, the largest antitobacco media campaign conducted in the United States.

