
Session Title: Using Indicators to Unite Partners and Players in a Common Evaluation Enterprise: Examples From the Centers for Disease Control and Prevention (CDC)
Panel Session 935 to be held in the Granite Room, Section B, on Saturday, Nov 8, 4:00 PM to 5:30 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Thomas Chapel,  Centers for Disease Control and Prevention,  tchapel@cdc.gov
Abstract: Consensus among key participants regarding a program and its components is optimal, but conceptual consensus should be operationalized as indicators, and those indicators should be matched with appropriate data sources. In federal programs implemented by networks of grantees and frontline practitioners, developing indicators is a formidable process because evaluation skills and the availability of data sources vary. The CDC programs on this panel use indicators as a tool for monitoring and illustrating grantee performance. Representatives will discuss their programs, the involvement of their grantees and partners in developing evaluation approaches, and the perceived need for indicators. The process for developing and implementing indicators will be discussed, as will decisions about where to impose uniformity or grant autonomy in indicators and data collection. Transferable lessons from CDC's experience will be identified.
Identifying Core Evaluation Indicators to Support Strategic Program Needs
Jan Jernigan,  Northrop Grumman Corporation,  jjernigan1@cdc.gov
Susan Ladd,  Centers for Disease Control and Prevention,  sladd@cdc.gov
Todd Rogers,  Public Health Institute,  txrogers@pacbell.net
Erika Fulmer,  RTI International,  fulmer@rti.org
Ron Todd,  Centers for Disease Control and Prevention,  rhtodd@cdc.gov
Dyann Matson-Koffman,  Centers for Disease Control and Prevention,  dmatsonkoffman@cdc.gov
Many public health programs use logic models and outcome evaluation indicators to meet accountability requirements and inform program improvement. CDC's Division for Heart Disease and Stroke Prevention (DHDSP) has designed logic models and identified indicators for monitoring progress toward its national goals. In addition, DHDSP used a structured process to select a set of core indicators to guide both national and state efforts. The process balanced the Division's strategic needs with the realities of state-level programs that vary widely in capacity, resources, and context. We describe the purpose, process, and pitfalls of DHDSP's core indicator identification. We review the conceptual challenges and practical barriers of core indicator and measure development, dissemination, and implementation, and discuss anticipated issues in data analysis and use. The indicator development process is an example of using evaluation indicators to support program goals and better link strategic planning to program evaluation.
Quantitative Indicators for Evaluating Selected Cooperative Agreement Applications for the Centers for Disease Control and Prevention's Division of Cancer Prevention and Control
Cindy Soloe,  RTI International,  csoloe@rti.org
Phyllis Rochester,  Centers for Disease Control and Prevention,  prochester@cdc.gov
Jamila Fonseka,  Centers for Disease Control and Prevention,  jfonseka@cdc.gov
Debra Holden,  RTI International,  dholden@rti.org
Sonya Green,  RTI International,  sgreen@rti.org
CDC and RTI International are developing an approach to standardize the selection and evaluation of national organization partnerships funded through CDC's Division of Cancer Prevention and Control (DCPC). The project will result in quantitative measures and scales for evaluating cooperative agreement recipients, both at initial application and in subsequent years of funding. Cooperative agreements are important to DCPC because they are the major funding mechanism at CDC. Drawing on key stakeholder input from DCPC and currently funded partners, we developed draft indicators to assess applications and post-award performance. Indicators include: Quality of Proposed Work, Data-Driven Decision Making, Applicant Capacity, Return on Investment, Value to CDC, CDC Funding Opportunity Announcement Requirements, and Plan for Sustainability. The immediate outcome of this effort will be a set of quantitative evaluation tools that CDC can apply to standardize its approach to funding and evaluating partnerships in cancer control. Ultimately, we hope this approach will make CDC's expectations of cooperative agreement recipients transparent and quantifiable.
Developing a Framework for Evaluating Comprehensive Cancer Control Programs Using Performance Measures
Phyllis Rochester,  Centers for Disease Control and Prevention,  pfr5@cdc.gov
Debra Holden,  RTI International,  debra@rti.org
Since 1998, the Division of Cancer Prevention and Control (DCPC) of the Centers for Disease Control and Prevention (CDC) has provided funding for comprehensive cancer control (CCC) activities, supporting 65 programs in 2008. In 2006, CDC and RTI International initiated a process of evaluating CCC programs, developing an evaluation framework and associated performance measures. Working with funded programs, CDC and RTI identified indicators that express outcomes for CCC activities over time and depicted them graphically. An assessment tool has been developed for each performance measure. Aggregated data submitted by programs using this tool will be discussed, as will how these data will guide future refinement of the performance measures for CDC. CDC intends for this information to make transparent its expectations of funded grant recipients and to document the important accomplishments of CCC programs.
