|
Designing Monitoring and Evaluation Systems That Make Good Use of Minimum Packages as Indicators of Quality Health Programming
|
| Presenter(s):
|
| Ian Membe, Centers for Disease Control and Prevention, membei@zm.cdc.gov
|
| Ian Milimo, PEPFAR Coordination Office, Zambia, milimoi@state.gov
|
| Lungowe Mwenda, Centers for Disease Control and Prevention, mwapelal@zm.cdc.gov
|
| Abstract:
Minimum packages are not new. They have been used in various contexts as indices to ensure that programs provide services that can legitimately be counted as support or beneficiary 'reach'. In global HIV/AIDS programs they have provided a yardstick by which program success can be evaluated. A concern remains, however, about how these measures relate to a program's overall monitoring and evaluation (M&E) system and its indicators. Do we include them as separate measures or as part of the M&E system? What about partners who cannot provide the whole package?
Monitoring and evaluation systems need to be set up on a needs basis so that they respond to the specific felt needs of target beneficiaries. Evaluations then respond by measuring the program's response to those needs as well as the causal linkage between met needs and program impacts. The minimum package then becomes the recipient's minimum package. A sketch of the counting logic this implies appears below.
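The counting question above can be made concrete: a beneficiary is counted as 'reached' only when every component of the minimum package has been delivered, while partial delivery is tracked separately so that partners who cannot provide the whole package remain visible in the M&E system. The following is a minimal sketch of that logic; the package components and service records are illustrative assumptions, not drawn from any particular program.

    # Sketch: counting beneficiary 'reach' against a minimum package.
    # Package components below are hypothetical examples.
    MINIMUM_PACKAGE = {"hiv_testing", "psychosocial_support", "nutrition_support"}

    def classify_reach(services_received: set) -> str:
        """Classify a beneficiary as reached, partially reached, or unreached."""
        delivered = services_received & MINIMUM_PACKAGE
        if delivered == MINIMUM_PACKAGE:
            return "reached"            # counts toward the headline indicator
        if delivered:
            return "partially_reached"  # reported separately, not as 'reach'
        return "unreached"

    # Tally reach status across hypothetical beneficiary service records.
    records = [
        {"hiv_testing", "psychosocial_support", "nutrition_support"},
        {"hiv_testing"},
    ]
    tally = {}
    for services in records:
        status = classify_reach(services)
        tally[status] = tally.get(status, 0) + 1
    print(tally)  # {'reached': 1, 'partially_reached': 1}

Separating the two statuses keeps the headline indicator honest while still crediting partial support, which is one way to reconcile minimum packages with the rest of an M&E system.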
|
|
Health Care Public Reporting: A High Stakes Evaluative Tool
|
| Presenter(s):
|
| Sasigant O'Neil, Mathematica Policy Research, so'neil@mathematica-mpr.com
|
| John Schurrer, Mathematica Policy Research, jschurrer@mathematica-mpr.com
|
| Christy Olenik, National Quality Forum, colenik@qualityforum.org
|
| Abstract:
Public reporting of health care quality performance measures has become a high-stakes game. However, the diversity of purposes, audiences, and data sources among public reporting initiatives can make it difficult to identify opportunities for coordination in pursuit of a national agenda for assessing, evaluating, and promoting health care quality improvement. To help identify such opportunities, we conducted an environmental scan of public reporting initiatives and their measures. Initiative characteristics included audience; geographic level; report dates; payer type; sponsor; organization type; and when public reporting began. Measures were mapped to a framework of national priorities and goals, as well as other conceptual areas of importance, such as cost and health condition. Measure characteristics such as data source, endorsement by the National Quality Forum, target population, and unit of analysis were also collected. A group of national leaders used the scan results to begin identifying a community dashboard of standardized measures.
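As a rough illustration of how such a scan can be organized, each initiative and its measures can be captured as structured records and then grouped under the national priority framework. The field names and categories below are assumptions for illustration only, not the scan's actual schema.

    # Sketch: representing environmental-scan records and mapping measures
    # to a priority framework. All field names are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Measure:
        name: str
        data_source: str               # e.g., claims, survey, registry
        nqf_endorsed: bool             # National Quality Forum endorsement
        target_population: str
        unit_of_analysis: str          # e.g., hospital, physician, community
        priority_areas: list = field(default_factory=list)

    @dataclass
    class Initiative:
        sponsor: str
        audience: str
        geographic_level: str
        payer_type: str
        reporting_began: int           # year public reporting started
        measures: list = field(default_factory=list)

    def map_to_framework(initiatives):
        """Group measure names under each national priority area."""
        framework = {}
        for initiative in initiatives:
            for measure in initiative.measures:
                for area in measure.priority_areas:
                    framework.setdefault(area, []).append(measure.name)
        return framework

Grouping measures this way makes overlap across initiatives visible, which is the precondition for selecting the standardized measures a community dashboard would draw on.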
|
|
Using a Program Evaluation Approach to Ensure Excellence in Physician Practice Assessments
|
| Presenter(s):
|
| Wendy Yen, College of Physicians and Surgeons of Ontario, wyen@cpso.on.ca
|
| Rhoda Reardon, College of Physicians and Surgeons of Ontario, rreardon@cpso.on.ca
|
| Bill McCauley, College of Physicians and Surgeons of Ontario, bmccauley@cpso.on.ca
|
| Dan Faulkner, College of Physicians and Surgeons of Ontario, dfaulkner@cpso.on.ca
|
| Wade Hillier, College of Physicians and Surgeons of Ontario, whillier@cpso.on.ca
|
| Abstract:
A key function of medical regulatory authorities is to ensure public and patient safety through the development and implementation of programs to assess physician performance and competence. We describe the development of a conceptual model for physician assessment that represents a shift from a singular focus on 'valid and reliable' assessment tools to a framework that places equal importance on all 'components' of an assessment program. That is, while tool development is undoubtedly a key component of quality assessments, the new model places equal emphasis on other components of the assessment process that may also affect outcomes (e.g., assessor training, use of assessment reports by decision bodies). The movement from a measurement model to a program evaluation model represents a paradigmatic shift from a positivistic framework to one that recognizes the inherent complexities of health science research and illustrates how transparency, utility, and mixed-methods approaches can also be used to achieve desired outcomes.
|
|
Evaluating the Quality of Quality Improvement Training in Healthcare
|
| Presenter(s):
|
| Daniel McLinden, Cincinnati Children's Hospital Medical Center, daniel.mclinden@cchmc.org
|
| Stacey Farber, Cincinnati Children's Hospital Medical Center, stacey.farber@cchmc.org
|
| Martin Charns, Boston University, mcharns@bu.edu
|
| Carol VanDeusen, United States Department of Veterans Affairs, carol.vandeusenlukas@va.gov
|
| Abstract:
Quality improvement (QI) in healthcare is an increasingly important approach to improving health outcomes, system performance, and patient safety. Effectively implementing QI methods requires knowledge of how to design and execute QI projects. Given that this capability is not yet widespread in healthcare, training programs have emerged to develop these skills in the healthcare workforce. In spite of the growth of such programs, limited evidence exists about their merit and worth. We report here on a multi-year, multi-method evaluation of a QI training program at a large Midwestern academic medical center. Our methodology will demonstrate an approach to organizing a large-scale training evaluation. Our results will provide the best available evidence on features of the intervention, its outcomes, and the contextual features that enhance or limit efficacy.
|