

Session Title: Clinical and Translational Science Awards: Evaluation Approaches and Dimensions of Quality
Multipaper Session 308 to be held in Suwannee 11 on Thursday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Don Yarbrough, University of Iowa, d-yarbrough@uiowa.edu
Discussant(s):
Don Yarbrough, University of Iowa, d-yarbrough@uiowa.edu
Abstract: The NIH NCRR established the Clinical and Translational Science Awards in 2005 with the goal of creating a national consortium to transform clinical and translational research in the United States. The national consortium now consists of three cohorts totaling 38 academic health centers. Each participating institution, typically under the aegis of a new or existing "Institute for Clinical and Translational Science," has proposed and implemented a site-level "monitoring, tracking, and evaluation" component (EC). However, the individual institutes' evaluations vary widely in structure, purpose, and function. These papers provide a detailed look at site-level dimensions of difference, for example: 1) EC location (internal or external to the Institute), 2) administration and governance, 3) reporting mechanisms and structure, 4) funding and resources available, 5) identified purposes, 6) anticipated challenges and barriers, 7) general evaluation approaches and methods, and 8) definitions and dimensions of evaluation quality.
Clinical and Translational Science Awards (CTSA) Evaluators' Survey: An Overview
Cath Kane, Weill Cornell Clinical Translational Science Center, cmk42@cornell.edu
Sheila K Kessler, Northwestern University, s-kessler@northwestern.edu
Knut M Wittkowski, Rockefeller University, kmw@rockefeller.edu
At the 2008 CTSA evaluators meeting in Denver, several members gathered structured input from their colleagues toward the development of a basic CTSA Evaluators Survey. The survey was designed to investigate common trends in three critical areas: 1) the management of evaluation, 2) data sources and collection methods, and 3) analysis or evaluation activities currently being conducted. The survey was conducted in the spring of 2009, with CTSA evaluation staff intended as both the survey participants and the primary audience for findings. Its goal was to provide a comparative overview of all current CTSA evaluators, to allow each team to solicit best practices, and to allow the various CTSA evaluation teams to orient themselves relative to their peers. As such, a review of the survey results serves as a fitting introduction to the series of case studies gathered for this multipaper session of CTSA evaluators.
Evaluation in a Complex Multi-Component Initiative: University of Wisconsin's Institute for Clinical and Translational Research
D Paul Moberg, University of Wisconsin, dpmoberg@wisc.edu
Jan Hogle, University of Wisconsin, jhogle@wisc.edu
Jennifer Bufford, Marshfield Clinic Research Foundation, bufford.jennifer@mcrf.mfldclin.edu
Christina Spearman, University of Wisconsin, cspearman@wisc.edu
UW-ICTR's Evaluation Office provides program evaluation support to nearly 30 components across the University and the Marshfield Clinic with 2.5 FTE (four people) located at both sites. Organizationally, Evaluation is part of Administration, integrating cross-component evaluation, priority setting, reporting, accountability, and quality improvement, as well as providing informal "ethnographic" access to the Institute's daily activities. Building internal rapport is key to leveraging interest in program evaluation in an environment unfamiliar with its concepts. We additionally focus on cross-component evaluation addressing ICTR goals and specific aims, and we participate in national consortium activities. Our role involves helping components better articulate goals and objectives; assisting with annual report narratives; conducting key informant interviews and investigator surveys; and compiling case studies. Our Evaluation Working Group represents all cores in a utilization-focused, participatory approach to monitoring and evaluation. The presentation will further discuss our approach, findings, and possible indicators of "evaluation quality."
Atlanta Clinical Translational Science Institute: An Evaluation Framework
Iris Smith, Emory University, ismith@sph.emory.edu
Tabia Henry Akintobi, Morehouse School of Medicine, takintobi@msm.edu
Brenda Hayes, Morehouse School of Medicine, bhayes@msm.edu
Andrew West, Emory University, awest2@emory.edu
Cam Escoffery, Emory University, cescoff@sph.emory.edu
The Atlanta Clinical Translational Science Institute (ACTSI) represents a multi-institutional partnership between Emory University, Morehouse School of Medicine, the Georgia Institute of Technology, and several community organizations. The evaluation function is highly collaborative and organized around an evaluation team consisting of evaluators from each of the primary evaluation user groups, i.e., the collaborating academic institutions, the Institute's Bioinformatics key function program, and ACTSI administration, plus a part-time research assistant. The evaluation team meets bi-weekly to plan evaluation activities, review data, and prepare evaluation reports for ACTSI leadership. All decisions are made through consensus. This collaborative approach to evaluation has been beneficial in a number of ways: facilitating evaluation "buy-in"; providing a vehicle for rapid communication of evaluation plans and findings; providing centralized coordination of evaluation activities across the ACTSI key functions; strengthening existing partnerships; and fostering the development of additional collaborative activities among team members.
Duke Translational Medicine Institute (DTMI) Evaluation
Rebbecca Moen, Duke Translational Medicine Institute, rebbecca.moen@duke.edu
Melissa Chapman, University of Iowa, melissa-chapman@uiowa.edu
Vernita Morgan, University of Iowa, vernita-morgan@uiowa.edu
The DTMI evaluation tracks metrics for continuous quality improvement and is focused on measuring how each component contributes to the goals of the national CTSA Consortium and the DTMI. Using a decentralized model, DTMI component leadership and staff monitor metrics for their respective components and provide reports to DTMI Administration. Our evaluation efforts are fully integrated within our administrative leadership structure and strategic planning processes. We have developed a tool called a "Zerhouni-gram," derived from instruments used to develop the NIH Roadmap, in which each component describes its strategic goals for the following year, including: 1) activities that "definitely can get done," 2) activities that "should get done," 3) stretch goals, and 4) national CTSA Consortium goals. Each component completes worksheets on an annual basis to describe its progress toward these strategic activities, its individual component goals, and, if applicable, its progress on issues identified in the previous review by our External Advisory Committee.
ICTS Evaluation and Metaevaluation: The University of Iowa Approach
Emily Lai, University of Iowa, emily-lai@uiowa.edu
Antionette Stroter, University of Iowa, a-stroter@uiowa.edu
Douglas Grane, University of Iowa, douglas.grane@gmail.com
The CTSA evaluation structures and functions at the University of Iowa are distributed across all key functions. The responsibility for conducting ongoing and annual evaluations resides with the key function directors. The Center for Evaluation and Assessment (CEA), an independent, third-party, regents-approved evaluation and assessment center since 1992, is responsible for formative and summative metaevaluation. The Director of the CEA consults with the ICTS Executive Committee and Key Function members on a bi-weekly basis. Staff members at the CEA are available for ongoing formative consultation on evaluation activities and for evaluating specific sub-components that fall outside the reach of individual key functions. In addition, the CEA works with the Informatics Key Function to review institute-wide information needs for the overall governance function. Beginning in Year 2, the CEA assumed responsibility for an overall metaevaluation of all Key Function Monitoring, Tracking and Evaluation components and products.
