Session Title: Evaluation of the Clinical and Translational Science Awards (CTSA) Programs: A Focus on Quality

Panel Session 133 to be held in REPUBLIC B on Wednesday, Nov 10, 4:30 PM to 6:00 PM

Sponsored by the Health Evaluation TIG

Chair(s): D Paul Moberg, University of Wisconsin, dpmoberg@wisc.edu

Discussant(s): William M Trochim, Cornell University, wmt1@cornell.edu

Abstract:
This panel addresses evaluation quality in a complex organizational environment implementing health research infrastructure interventions -- specifically, the 46 academic institutions receiving Clinical and Translational Science Awards (CTSAs). Program evaluation happens at multiple levels as required by funders. CTSA evaluators with broad disciplinary backgrounds apply a range of approaches and mechanisms to evaluating these interventions. The settings and context raise many questions regarding the very concept/definition of evaluation, necessary level of rigor, range of purposes, and level of independence versus integration, leading us to a constant need to “evaluate our evaluation”. Our presentations explore: 1) applying evaluation standards to improve programs; 2) integrating external evaluative input into quality improvement; 3) using qualitative data to enhance evaluation utility; 4) linking program needs to evaluation quality; and 5) examining the utility of publication data as a key metric measuring the quality of biomedical research in the context of the CTSA program.

Using the Program Evaluation Standards, Third Edition, to Investigate and Improve CTSA Program Evaluation

Emily Lai, University of Iowa, emily-lai@uiowa.edu
Melissa Chapman, University of Iowa, melissa-chapman@uiowa.edu
Donald Yarbrough, University of Iowa, d-yarbrough@uiowa.edu

The revised Program Evaluation Standards (3rd edition, SAGE, 2010) provide explanations, rationales, implementation guidelines, and case applications to improve evaluation quality along five key dimensions: utility, feasibility, propriety, accuracy, and evaluation accountability (new to the 3rd edition). This paper will introduce real-life evaluation problems and dilemmas in Clinical and Translational Science (CTS) program evaluations and illustrate how the 3rd edition standards can guide reflective practice to define and improve evaluation quality. For example, what can be done when stakeholders in an evaluation are too busy or uninterested to be involved and contribute? How can evaluation quality be maximized in the face of too few resources? How can balance among different dimensions of evaluation quality be achieved when they require different actions but draw on the same resources? This paper also illustrates how formative and summative, internal and external metaevaluative approaches can be used reflectively to improve CTS program evaluation quality and accountability.

External Advisory Committee Recommendations Incorporated Into Utilization-Focused Evaluation

Janice A Hogle, University of Wisconsin, Madison, jhogle@wisc.edu
D Paul Moberg, University of Wisconsin, Madison, dpmoberg@wisc.edu
Christina Spearman, University of Wisconsin, Madison, cspearman@wisc.edu
Jennifer L Bufford, Marshfield Clinic, bufford.jennifer@mcrf.mfldclin.edu

The evaluation and tracking component of the University of Wisconsin’s Institute for Clinical and Translational Research juggles program evaluation on multiple levels. Internal evaluators are responsible for assisting 30 components with identifying metrics and with managing, analyzing, and interpreting data; collaborating with national Consortium evaluators; and assisting with quality improvement and strategic direction. This presentation examines one slice of the effort: preparing for, interpreting, and implementing annual recommendations from our External Advisory Committee (EAC). The EAC reviews implementation, including evaluation processes and results. This paper addresses how we use this review process and the resulting recommendations to improve our evaluation and enhance achievement of Institute goals and objectives. Both the EAC process and its recommendations have had an impact on quality improvement for our evaluation and our program. Discussion focuses particularly on our medium-term outcomes and on how we’ve arrived at “appropriate” metrics, using Patton’s emerging model of Developmental Evaluation.

In-depth Interviews With CTSA Investigators: Contributions to Quality Evaluation

Christine Weston, Johns Hopkins University, cweston@jhsph.edu
Jodi B Segal, Johns Hopkins University, jsegal@jhmi.edu

As part of its evaluation plan, the Johns Hopkins Institute for Clinical and Translational Research (ICTR) conducted a series of in-depth case study interviews with a diverse group of clinical and translational investigators. This presentation will highlight 1) the advantages and challenges of using a qualitative approach to evaluation in the context of the CTSA, 2) the extent to which the findings were valuable and useful for decision-making and strategic planning by our ICTR leadership, and 3) the challenge of applying the insights gained through the interviews to improving the overall quality of our evaluation.

Linking Clinical and Translational Science Evaluation Purpose to Quality

Christine Minja-Trupin, Meharry Medical College, ctrupin@mmc.edu

The evaluators’ role of providing evidence of the extent to which CTS programs are achieving stated objectives presents important challenges: 1) establishing evaluation systems for massive data collection, and 2) working within existing evaluation systems in which most of the data collected are never analyzed, only some of the analyzed data are reported, and very few evaluation reports are used. The need for efficient, high-quality data has never been greater. In response, the Meharry Translational Research Center (MeTRC), awarded in November 2009, will use program needs to guide the purpose of its evaluation. At this early stage, for example, refining the program plan is a priority. Every step of the implementation is a hypothesis; the role of evaluation is to identify, test, and clarify the assumptions underlying the program logic. The extent to which the evaluation improves the program will provide the basis for assessing the quality of the evaluation.

“And So It Is Written”: Publication Data as a Measure of Quality and the Quality of Publication Data as an Evaluative Tool

Cathleen Kane, Cornell University, cmk42@cornell.edu

Evaluating quality is a challenge. For CTSAs this challenge is compounded by lengthy time horizons (“17 years for only 14% of new scientific discoveries to enter clinical practice”) and an ambitious mission (“moving research from bench to bedside”). Biomedical research is a complex system that involves multiple feedback loops, many participants, and no clear starting or ending point. In a system where even a failed experiment informs and improves the next, how can evaluators measure the quality of goals and objectives that, at best, will take a generation to be realized? Publication data are an attractive option because they can speak to quality (journal impact factor), quantity (publication and citation rates), and collaboration (co-authorship). In this presentation we will outline the many evaluative advantages of and options for publication data, with a focus on quality, followed by a discussion of the potential risks of overreliance on such a valuable indicator.