Session Title: Evaluation Meets Management: What Every Evaluator Should Know About His or Her Role in Performance Measurement - Examples From the Centers for Disease Control and Prevention (CDC)

Panel Session 408 to be held in Mineral Hall Section E on Thursday, Nov 6, 4:30 PM to 6:00 PM

Sponsored by the Evaluation Managers and Supervisors TIG

Chair(s):
Thomas Chapel, Centers for Disease Control and Prevention, tchapel@cdc.gov

Abstract:
The increasing emphasis on accountability forces high-level decision makers to contemplate program performance in a disciplined way. In some organizations, however, planners, budgeters, evaluators, and performance monitors work in isolation from one another, using such different approaches and terminology that opportunities to meld their insights into a common approach for improving the organization are missed. This panel presents three CDC programs with strong evaluation efforts under way, in which prior thinking on program evaluation and logic modeling served the programs well in addressing performance measurement mandates as they arose. Program representatives will describe their situations, their prior evaluation approaches, the nature of the mandates to which their programs are subject, and how evaluation efforts and insights have helped meet those mandates. The payoffs of this closer association for both performance measurement and evaluation will be presented.

Indicator Development, Measurement, and Data Collection: Successes, Hazards, and Detours Experienced by the Prevention Research Centers

Jo Anne Grunbaum, Centers for Disease Control and Prevention, jgrunbaum@cdc.gov
Demia Wright, Centers for Disease Control and Prevention, amy7@cdc.gov
Nicola Dawkins, Macro International Inc, nicola.u.dawkins@macrointernational.com
Amee Bhalakia, Macro International Inc, amme.m.bhalakia@macrointernational.com
Natalie Birnbaum, Northrop Grumman Corporation, nib6@cdc.gov
Eduardo Simoes, Centers for Disease Control and Prevention, ejs0@cdc.gov

The Prevention Research Centers (PRC) Program is a network of 33 academic research centers funded by the Centers for Disease Control and Prevention. Researchers collaborate with public health agencies and community members to conduct applied prevention research. A national evaluation strategy, begun in 2001 to address program accountability, has produced a logic model, program indicators, and qualitative studies. The indicator data describe aspects of prevention research that can be tracked over time, such as the number of research and training projects occurring at the centers, PRCs' publications and presentations, community involvement in the research, and adoption of PRC-tested interventions by communities. This presentation describes the development of the indicators and of an Internet-based information system for data collection, as well as the participatory processes used throughout development. Strategies that worked well, along with the hazards and detours encountered along the way, will be shared.

Performance Measures: An Important Tool When Conducting Multi-Site Evaluations

Ann Ussery-Hall, Centers for Disease Control and Prevention, aau6@cdc.gov

The Steps Program is a multi-site, multi-level, multi-focus program. Through Steps, CDC funds 40 communities nationwide to implement chronic disease prevention and health promotion activities. The Steps team developed a set of core performance measures to systematically measure program implementation and progress toward intended health outcomes throughout the five-year program. These standardized measures provide a cumulative, national view of the program. Because communities have their own evaluation plans and data collection methods, the performance measures are an important tool for providing unifying, cross-cutting information. Collected annually, the core performance measures give Steps stakeholders a national picture of the local activities; this systematic approach to gathering data from communities has allowed CDC to view the program as a whole. The utility of the core performance measures will continue as CDC considers next steps, including dissemination of materials and lessons learned and replication of Steps components.

Indicator Development to Tell Your Program's Story: CDC's National Asthma Program

Leslie Fierro, Centers for Disease Control and Prevention, let6@cdc.gov
Elizabeth Herman, Centers for Disease Control and Prevention, ehh9@cdc.gov

Measurement of program 'indicators' that align with the logic underlying a program can help identify and communicate areas for improvement, document progress toward intended outcomes, and demonstrate successes. Indicators developed in this way have the potential to 'fill in' the gaps left by the performance measures typically produced for the Government Performance and Results Act (GPRA) and the Program Assessment Rating Tool (PART), and may enable programs to tell their story more fully. This presentation will track the efforts of CDC's National Asthma Program in developing process and outcome indicators that reflect the story and logic of 35 state asthma programs. The collaborative process used to facilitate a shared vision of the program's purpose, to articulate evaluation questions, and to develop a process for measuring indicators will be described.