
Session Title: Methods and Tools for Evaluating Clinical and Translational Science
Panel Session 779 to be held in Malibu on Friday, Nov 4, 4:30 PM to 6:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Arthur Blank, Albert Einstein College of Medicine, arthur.blank@einstein.yu.edu
Discussant(s):
Paul Mazmanian, Virginia Commonwealth University, pemazman@vcu.edu
Abstract: Clinical and translational science is at the forefront of biomedical research and practice in the 21st century. The Clinical and Translational Science Awards (CTSAs) are the largest initiative at NIH, and the 55 center grant evaluation teams constitute a unique national field laboratory for the evaluation of biomedical research and practice. The five presentations in this panel address: Microsoft Project 2010 for linking strategic goals to evaluation; social network analysis of annual online surveys of scientists; methods for tracking and measuring the impact of research training programs; Tyler Matrix analysis that augments traditional logic modeling with information about strategic goals and objectives and links them to methods; and return-on-investment approaches for assessing pilot and educational programs. The panel will present these five evaluation studies and discuss their implications both for the specific context of the CTSAs and for the field of evaluation generally.
Linking Strategic Goals to Evaluation Using Microsoft Project 2010
Lisle Hites, University of Alabama, Birmingham, lhites@uab.edu
Susan Lyons, University of Alabama, Birmingham, lyons@uab.edu
Molly Wasko, University of Alabama, Birmingham, mwasko@uab.edu
Strategic planning and programmatic evaluation are two essential components of a successful program. At the University of Alabama at Birmingham's (UAB) CTSA, the Center for Clinical and Translational Science (CCTS), programmatic evaluation efforts have centered on the strategic planning process to ensure that performance measures are grounded in strategic goals and assessed accordingly. However, given the enormous scope of work inherent in each CTSA (e.g., the integration of many different cores or programmatic components, and a vast range of research targets stretching from pre-clinical work across the T1 through T4 translational spectrum), comprehensive evaluation and continuous improvement of so many activities become onerous. The UAB CCTS has elected to use Microsoft (MS) Project 2010 as an organizational management tool to facilitate this large evaluation effort; the value added of a comprehensive project planning tool, along with successes and challenges, will be discussed.
Using Survey-based Social Network Analysis to Establish an Evaluation Baseline and Detect Short-term Outcomes of a Clinical and Translational Science Center
Megan Haller, University of Illinois, Chicago, mhalle1@uic.edu
Eric Welch, University of Illinois, Chicago, ewwelch@uic.edu
Using data from an annual online survey of scientists at the University of Illinois at Chicago's Center for Clinical and Translational Science (CCTS) and a control group of comparable scientists, this paper examines how collaborative network structure and resource exchange patterns vary between CCTS participants and non-participants, and whether CCTS-related institutions are associated with that variation. The survey captures ego-centric collaborative network structure both within and outside academe; the duration and origin of each relationship; resource and knowledge exchange; attitudes toward clinical and translational research; and a range of activities including grants, conferences, workshops, new manuscripts, clinical research initiatives, interaction with the public, and education and policy activity. Survey-based ego-centric network analysis establishes a multidimensional baseline that captures early outcomes, supports attribution to program activities, and provides feedback to program managers.
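To make the ego-centric approach concrete, the following is a minimal sketch of how a single survey response might be turned into baseline network statistics. The response format, respondent and alter names, and values are hypothetical illustrations, not the UIC survey instrument; the sketch assumes the networkx library and standard ego-network measures (size, density, constraint).

```python
# Minimal sketch: deriving ego-network baseline measures from one
# (hypothetical) survey response listing the respondent (ego), the
# collaborators they named (alters), and which alter pairs collaborate.
import networkx as nx

# Hypothetical response: ego "r1" names three alters; one alter pair collaborates.
response = {
    "ego": "r1",
    "alters": ["a1", "a2", "a3"],
    "alter_ties": [("a1", "a2")],
}

def ego_network(resp):
    """Build the ego network: ego-alter ties plus reported alter-alter ties."""
    g = nx.Graph()
    g.add_edges_from((resp["ego"], a) for a in resp["alters"])
    g.add_edges_from(resp["alter_ties"])
    return g

g = ego_network(response)
print("network size (degree):", g.degree(response["ego"]))
print("density:", nx.density(g))
# Burt's constraint is a common structural-holes baseline measure:
print("constraint:", nx.constraint(g)[response["ego"]])
```

Computing such measures for each respondent each year yields the multidimensional baseline the abstract describes, against which short-term changes can be compared across participants and controls.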
Tracking for Translational: Novel Tools for Evaluating Translational Research Education Programs
Julie Rainwater, University of California, Davis, julie.rainwater@ucdmc.ucdavis.edu
Erin Griffin, University of California, Davis, erin.griffin@ucdmc.ucdavis.edu
Stuart Henderson, University of California, Davis, stuart.henderson@ucdmc.ucdavis.edu
The Clinical and Translational Science Awards (CTSA) incorporate innovative translational research training programs aimed at producing a diverse cadre of scientists who work collaboratively to rapidly translate biomedical research into clinical applications. Evaluating programs that emphasize team science, interdisciplinary research, and acceptance of a range of career trajectories challenges evaluators to develop outcome measures that go beyond simply counting traditional academic products, such as individual publications and grants. This presentation describes methods used at the UC Davis Clinical and Translational Science Center to track, analyze, and visualize the value and quality of translational research training. Using informatics and evaluation expertise, we developed tools that track products in a way that captures the essential qualities of translational research, such as multidisciplinary collaboration and teamwork, rather than individual success. A method for visualizing the collaboration networks of successful multidisciplinary teams with Collexis Research Profiles will be described.
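As an illustration of measuring collaboration rather than individual output, here is a minimal sketch of one possible team-science indicator: the share of trainee publications whose authors span multiple departments. The records and field names are invented for illustration; in practice such data would come from a profiling system like Collexis Research Profiles.

```python
# Minimal sketch: scoring training products for multidisciplinary
# collaboration by counting distinct departments per publication.
# All records below are hypothetical.

publications = [
    {"id": "pub1", "author_departments": ["Biostatistics", "Cardiology"]},
    {"id": "pub2", "author_departments": ["Cardiology"]},
    {"id": "pub3", "author_departments": ["Informatics", "Pediatrics", "Biostatistics"]},
]

def multidisciplinary_share(pubs, min_departments=2):
    """Fraction of publications drawing on at least `min_departments` departments."""
    multi = sum(1 for p in pubs
                if len(set(p["author_departments"])) >= min_departments)
    return multi / len(pubs)

print(f"multidisciplinary share: {multidisciplinary_share(publications):.2f}")  # 0.67
```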
Integrating the Logic Model and Tyler Matrix Approaches in Evaluating Translational Science
Babbi Winegarden, University of California, San Diego, bwinegarden@ucsd.edu
Angela Alexander, University of California, San Diego, a1alexander@ucsd.edu
Effective evaluation of components is a critical aspect of our 360-degree evaluation for our CTSA grant. To evaluate the components effectively, the UCSD CTRI uses a mixture of the Logic Model and the Tyler Model. When completing the Logic Model, we emphasize our inputs (resources) and define our outputs (outcomes) as primary (count data), secondary (improved knowledge, skills, and/or abilities), and tertiary (change in patient outcomes or overall impact). The Tyler Model adds information about goals, objectives, and methods that ties all of the pieces together. We have found the Tyler Matrix method to be a strong complement to the Logic Model; together they achieve what neither does alone. So far, our component directors have found this evaluation process to be effective. We will share our evaluation process, logic models, the Excel spreadsheet that combines the two approaches, feedback from directors, and our metrics process with the audience.
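To show how the two frameworks might combine in a single spreadsheet row, here is a minimal sketch pairing Tyler Matrix fields (goal, objective, method) with Logic Model fields (inputs and the three output tiers). The field names and sample values are assumptions for illustration, not the UCSD CTRI template.

```python
# Minimal sketch: one row of a combined Tyler Matrix / Logic Model table.
# Field names and the example row are hypothetical.
from dataclasses import dataclass, field

@dataclass
class EvaluationRow:
    goal: str          # Tyler Matrix: strategic goal
    objective: str     # Tyler Matrix: measurable objective
    method: str        # Tyler Matrix: how the objective is assessed
    inputs: list = field(default_factory=list)   # Logic Model: resources
    primary_output: str = ""    # count data
    secondary_output: str = ""  # improved knowledge, skills, abilities
    tertiary_output: str = ""   # change in patient outcomes / overall impact

row = EvaluationRow(
    goal="Accelerate pilot studies toward clinical application",
    objective="Fund and track ten pilot awards per year",
    method="Annual progress reports reviewed against milestones",
    inputs=["pilot award budget", "review committee"],
    primary_output="number of pilot awards funded",
    secondary_output="investigator grant-writing skills",
    tertiary_output="downstream trials informed by pilot results",
)
print(row.goal, "->", row.primary_output)
```

Keeping the Tyler fields and Logic Model fields in the same record is what lets each output tier be traced back to a named goal, objective, and assessment method.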
Reframing Analysis: Return on Investment Protocols for Clinical and Translational Science Programs
Kyle Grazier, University of Michigan, kgrazier@umich.edu
William Trochim, Cornell University, wmt1@cornell.edu
Limited ability and experience in assessing the effect of CTSA research funding on accelerating the translation of scientific knowledge is a generic issue faced by both individual CTSAs and NIH. To address this issue, investigators from the University of Michigan, Weill Cornell, and OHSU examine the return on investment of two key CTSA programs: pilot grants and education and training. By carefully studying the economic and social inputs and outputs of these programs, this work produces investigator-, program-, and institution-level estimates of return on investment. We create detailed protocols for assessing the value of these two CTSA functions; each protocol specifies objectives, methods, the data to be collected, and how those data are to be filtered and analyzed. We will provide a model and specific protocols that all CTSAs could potentially use to assess the economic and social returns on NIH and institutional investments in critical activities.
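For context, a minimal sketch of the core return-on-investment arithmetic underlying such protocols: ROI = (returns - investment) / investment. The dollar figures below are invented for illustration; the actual protocols define what counts as a return (for example, follow-on funding attributable to a pilot award) and how it is filtered.

```python
# Minimal sketch of the basic ROI calculation. All figures are hypothetical.
def roi(returns: float, investment: float) -> float:
    """Net return per dollar invested."""
    return (returns - investment) / investment

pilot_investment = 50_000     # hypothetical pilot award cost
follow_on_funding = 400_000   # hypothetical attributable follow-on grants
print(f"ROI: {roi(follow_on_funding, pilot_investment):.1f}")  # 7.0
```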
