Session Title: Not a Poor Cousin to Evaluation: The Critical Role of Performance Monitoring for Program Improvement

Panel Session 504 to be held in Sebastian Section I1 on Friday, Nov 13, 10:55 AM to 11:40 AM

Sponsored by the Government Evaluation TIG

Chair(s):
Elizabeth Harris, Evaluation, Management & Training Associates Inc, eharris@emt.org

Abstract:
The proposed panel will explicate the role and value of performance monitoring as a complement to traditional evaluation. More specifically, it will 1) clarify the distinctions between, and interrelation of, performance monitoring and evaluation, and 2) demonstrate how performance monitoring complements traditional evaluation by linking evaluative data more closely to policy and management decisions. These conclusions are grounded in the design, implementation, information products, and system improvements demonstrated in the comprehensive performance monitoring system developed to support quality improvement, effectiveness assessment, and decision making for the Centers for Disease Control and Prevention's recently consolidated National Consumer Response Services Center (CDC-INFO). Two presentations will be made: one by a member of the CDC-INFO management team, and one by a representative of EMT Associates, Inc., the external contractor for the performance monitoring and evaluation system.

Development of the Centers for Disease Control and Prevention's CDC-INFO Performance Monitoring System

Elizabeth Harris, Evaluation, Management & Training Associates Inc, eharris@emt.org

The introductory presentation will provide an overview of the CDC-INFO performance monitoring system and the systematic, collaborative process through which it was developed. EMT was contracted in 2005 to conduct a seven-year "performance evaluation." Influences that shaped the system came from the policy world, CDC's evaluation framework, and the overall goal of meeting CDC's information needs. The development of the system necessitated clarification of both evaluation and performance monitoring components, and explicit examples from the project ground these more general distinctions. Examples include a) the process of determining which factors should feed into the contact center contractor's award fee, and therefore into performance monitoring; b) the phasing of development, from initial quality improvement to the subsequent incorporation of stakeholder products that adapt real-time data to multiple decision and information needs (e.g., training and coaching, consumer need information for CDC topical programs); and c) the incorporation of outcome and effectiveness information to improve system outreach and content. Dr. Harris will present functional definitions for the terminology used and a perspective on the differences between performance monitoring and evaluation. The purpose of performance monitoring from the perspective of the CDC evaluation will be discussed, along with lessons learned and implications for the field of evaluation as a new federal fiscal year begins under a new administration.

Useful Information for Decision Making From the Centers for Disease Control and Prevention's CDC-INFO Performance Monitoring System

Paul Abamonte, Centers for Disease Control and Prevention, paa6@cdc.gov

Mr. Abamonte is CDC's Evaluation/Audience Research Officer for CDC-INFO (CDC's National Consumer Response Services Center) and an end user of the performance monitoring data and information generated by the system established by EMT. The "Performance Evaluation" contract was designed with the intent to "provide ongoing independent, systematic, and continuous evaluation" and "utilize industry best practices to ensure conformance to established performance standards and to assist management in key decision-making efforts" (p. 5 of the original RFQ). Mr. Abamonte will show how the information generated contributed to a feedback loop and resulted in demonstrable, measurable quality improvement. Examples of when, and in what context, evaluation techniques were brought in to answer key questions will be presented by way of illustration. The balance between competing information needs from the government perspective will also be explored in the context of increasing demands and decreasing funding. Implications for how evaluators can assist will be a focus.