
Session Title: Public Health Emergency Preparedness: Identifying and Interpreting Boundaries, Complexities, and Drivers to Establish an Evaluation Methodology
Panel Session 230 to be held in Suwannee 19 on Thursday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Disaster and Emergency Management Evaluation TIG
Chair(s):
Goldie MacDonald, Centers for Disease Control and Prevention, gim2@cdc.gov
Abstract: For better or worse, operations at every level of government are defined by certain regulatory, policy, or procedural guidelines. These guidelines, or drivers, provide important boundaries that inform the design of a program evaluation. The sheer volume and complexity of these drivers further complicate program evaluation planning, implementation, and use. This panel will focus on real-world practices developed to establish measurement models in the context of public health emergency preparedness. Each presentation describes a process of identifying and interpreting critical information that informs evaluation design from the very beginning of the program. The processes discussed may help other evaluators move beyond the daunting, and often defeating, first steps of design in a systematic way.
International Pandemic Influenza Preparedness: Establishing Guiding Principles as a Foundation for Program Monitoring and Evaluation
Goldie MacDonald, Centers for Disease Control and Prevention, gim2@cdc.gov
Danyael Garcia, Centers for Disease Control and Prevention, avj5@cdc.gov
Michael St Louis, Centers for Disease Control and Prevention, mes2@cdc.gov
Ann Moen, Centers for Disease Control and Prevention, alc3@cdc.gov
According to the World Health Organization (WHO), the geographical spread of H5N1 influenza in animals in 2006 was the fastest and most extensive of any pathogenic avian influenza virus recorded to date. Scientists and public health officials agree that the spread of the H5N1 virus in birds and the occurrence of infections in humans have increased vulnerability to a global pandemic. In response, individual countries and international organizations continue to develop and implement strategies to forestall the onset of a pandemic. As this work unfolds, program monitoring and evaluation strategies continue to evolve. The presenters discuss the identification and interpretation of critical information that informed evaluation design for one multi-site program. To this end, we highlight the development and use of Guiding Principles to address key issues of context relevant to program evaluation planning, implementation, and use of findings across 40 countries worldwide.
Using Preparedness Drivers to Frame Preparedness Evaluation
Julie Madden, Centers for Disease Control and Prevention, jmadden@cdc.gov
Diane Caves, Centers for Disease Control and Prevention, dcaves@cdc.gov
Stephanie Dopson, Centers for Disease Control and Prevention, sdopson@cdc.gov
LaBrina M Jones, Centers for Disease Control and Prevention, guh1@cdc.gov
Anita McLees, Centers for Disease Control and Prevention, zdu5@cdc.gov
Felicia Suit, Centers for Disease Control and Prevention, fsuit@cdc.gov
Todd P Talbert, Centers for Disease Control and Prevention, ttalbert@cdc.gov
David G Withum, Centers for Disease Control and Prevention, dgw2@cdc.gov
The Centers for Disease Control and Prevention (CDC) plays a key role in preparing the nation for public health threats, including natural, biological, chemical, radiological, and nuclear incidents. CDC uses annual Terrorism Preparedness and Emergency Response (TPER) funding from Congress to support a range of activities at CDC as well as at the state and local levels to develop and maintain response capacities and capabilities. To present a comprehensive picture of preparedness program achievements and to demonstrate accountability for funds, CDC is developing a comprehensive, integrated approach to federal public health preparedness measurement. This approach requires the synthesis of multiple legislative mandates and procedural guidance, as well as consideration of numerous reporting and measurement requirements. This presentation highlights the critical drivers that informed the development of CDC's federal public health preparedness measurement model, as well as the process used to identify the TPER-funded areas that are most important to measure.
Establishing a National Measurement System for Public Health Emergency Preparedness: Interpreting Context and Meeting Stakeholder Needs
Anita McLees, Centers for Disease Control and Prevention, zdu5@cdc.gov
Craig W Thomas, Centers for Disease Control and Prevention, cht2@cdc.gov
Karen Mumford, Centers for Disease Control and Prevention, eqh1@cdc.gov
Since 2002, CDC's Division of State and Local Readiness (DSLR) has awarded over $6 billion to 62 states, territories, and localities through the Public Health Emergency Preparedness (PHEP) Cooperative Agreement. Programmatic activities funded through this Cooperative Agreement operate within a highly complex system involving multiple players and the demonstration of numerous interdependent capabilities. Furthermore, these relationships and capabilities must be executed within the context of diverse policy mandates and federal initiatives. The development and implementation of a standardized national measurement system for PHEP required collaboration with numerous stakeholders and the implementation of systematic processes to interpret key program drivers and contextual factors that influence evaluation design, data collection, and use of findings. This presentation will highlight CDC's approach to measurement development and implementation for the PHEP Cooperative Agreement, including the unique challenges of developing and implementing a national-level performance measurement system that meets the information needs of multiple stakeholders.