

Session Title: Using Logic Models to Guide Program Implementation and Evaluation: Examples From the Centers for Disease Control and Prevention (CDC)
Panel Session 265 to be held in Centennial Section H on Thursday, Nov 6, 10:55 AM to 12:25 PM
Sponsored by the Program Theory and Theory-driven Evaluation TIG
Chair(s):
Thomas Chapel,  Centers for Disease Control and Prevention,  tchapel@cdc.gov
Abstract: The starting point of meaningful evaluation is a strong program description. Logic models have been the tool of choice for depicting programs in succinct, graphic formats. Once elaborated, logic models serve as program theories that can guide program planning, implementation, and evaluation. This session discusses selected CDC programs, all of which have used their program theories to guide diverse aspects of program implementation and evaluation. In each case, program staff present their initial thinking, how they identified and selected their program theory or frameworks, and how the underlying logic of their programs has influenced choices in implementation and evaluation.
Using a Logic Model to Guide Evaluation Capacity Building
Maureen Wilce,  Centers for Disease Control and Prevention,  mwilce@cdc.gov
Heather Joseph,  Centers for Disease Control and Prevention,  hbj7@cdc.gov
Logic models are widely used tools in planning and evaluation. Typically, logic models are created to help guide program evaluations, but may also be used as planning tools to reflect program capacity building processes. In 2002, the Division of TB Elimination and its partners decided to expand our evaluation efforts. From the literature and experience, we recognized the need to develop logic models for common TB program components. As we considered our strategy for evaluation capacity building (ECB), we realized that a different kind of logic model would be useful. This logic model delineated essential activities of the ECB process, identified interrelationships of our activities, and defined expected outcomes. It enabled us to focus our efforts, articulate the concept of ECB to external partners, and recruit participants to assist us in defined tasks. The logic model effectively guided the ECB process and provided a mechanism to track successes.
Logic Models as Multi-Faceted Resources to Advance Evaluation Policy
Marlene Glassman,  Centers for Disease Control and Prevention,  mglassman@cdc.gov
By requiring the health departments and community-based organizations (CBOs) it funds for HIV prevention to participate in its national monitoring and evaluation system, CDC's Division of HIV/AIDS Prevention (DHAP) puts forth a policy: that program monitoring and evaluation (M&E) should take place as elements of a comprehensive HIV prevention program. The three major components of M&E in HIV prevention are process monitoring, process evaluation, and outcome monitoring. Logic models can inform M&E as well as the design and implementation of evidence-based interventions. More specifically, logic models can help identify variables for process monitoring, process evaluation, and outcome monitoring, and can guide the development of SMART (specific, measurable, appropriate, realistic, and time-phased) objectives for each of these M&E activities. By tracking progress toward objectives grounded in behavior change and implementation logic, CBOs advance DHAP policy and the likelihood of intervention success.
Evaluability Assessment: Case Management for Completion of TB Therapy
Judy Gibson,  Centers for Disease Control and Prevention,  jsd0@cdc.gov
Maureen Wilce,  Centers for Disease Control and Prevention,  mwilce@cdc.gov
Tuberculosis (TB) treatment requires long-term but finite adherence by patient and provider. Case management for treatment completion is done by nurses, but the value added by the nurse is not well defined. Patient outcomes can be measured, but there is no system to link the nurses' performance with these results. The Division of TB Elimination (DTBE) recognized the need to clarify goals and objectives and to develop a theory of how nurses' performance affects patient outcomes. DTBE undertook an evaluability assessment (EA) with partners in state and local TB programs. The assessment began by engaging stakeholders on an EA team, reviewing the literature, and exploring practice. The team collaboratively developed a logic model and used it to develop evaluation tools and establish a performance measurement system. Initial applications of this system have shown the value of the EA for improving public health programs by enhancing capacity for self-evaluation.
