Evaluation 2008



Session Title: Evaluating Translational Research: Challenges for Evaluation Policy and Practice
Panel Session 866 to be held in the Granite Room Section B on Saturday, Nov 8, 10:45 AM to 12:15 PM
Sponsored by the Health Evaluation TIG
Chair(s):
William Trochim,  Cornell University,  wmt1@cornell.edu
Discussant(s):
William Trochim,  Cornell University,  wmt1@cornell.edu
Abstract: Translational research is a newly emerging discipline that attempts to integrate scientific research and communities of practice, and to change the way society and science interact. The field poses important new and complex conceptual, methodological and systems challenges for evaluation. This panel describes current evaluation efforts of the newly established NIH Clinical and Translational Science Awards (CTSA) initiative, a consortium expected by 2012 to include 60 academic centers linked nationally to connect academic institutions and communities of practice in health and medicine. The presentations address conceptual issues (how to define and operationalize translational research), methodological issues (the use of social network analysis to assess how diffusion of innovation changes over time in communities of practice) and systems issues (how business and management methods can help create an effective 'program of evaluation'). The implications of this work for evaluation policy and practice generally will be discussed.
Translational Research: Can Usable Categories be Created?
Ann Dozier,  University of Rochester,  ann_dozier@urmc.rochester.edu
Stephen Lurie,  University of Rochester,  stephen_lurie@urmc.rochester.edu
Camille Martina,  University of Rochester,  camille_martina@urmc.rochester.edu
Thomas Fogg,  University of Rochester,  thomas_fogg@urmc.rochester.edu
Thomas Pearson,  University of Rochester,  thomas_pearson@urmc.rochester.edu
Translational research is increasingly discussed in the literature, often in the context of NIH's Clinical and Translational Science Award (CTSA) initiative. While the term refers in broad strokes to different types of translational research, there is no consensus on the number of types or their definitions. To make these definitions meaningful for an evaluator, administrator or researcher, a classification schema is warranted, but developing a usable one to categorize actual research endeavors presents significant challenges. Through an ongoing research resource inventory at a CTSA-funded institution, all investigators biennially categorize each of their research projects across pre-defined fields (e.g., geographic scope, life stage, international disease classification). In 2007, fields describing translational research categories (not labeled as such) were added, and over 1,000 research projects were classified. A test-retest reliability assessment with a sample of 50 researchers, spanning the inventory's three categories of translational research, demonstrated that some investigators had difficulty interpreting the categories.
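Test-retest reliability of categorical assignments like these is commonly summarized with Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below is purely illustrative: the category labels ("T1"/"T2"/"T3") and ratings are hypothetical, not data from the Rochester inventory, and the abstract does not specify which statistic the authors used.

```python
# Hypothetical sketch: agreement between two rating passes over the
# same projects, scored with Cohen's kappa. Labels and ratings are
# made up for illustration; they are not the inventory's actual data.
from collections import Counter

def cohens_kappa(ratings_t1, ratings_t2):
    """Cohen's kappa for two categorical rating passes of equal length."""
    assert len(ratings_t1) == len(ratings_t2)
    n = len(ratings_t1)
    # Observed proportion of exact agreement
    observed = sum(a == b for a, b in zip(ratings_t1, ratings_t2)) / n
    # Chance agreement from each rater's marginal category frequencies
    c1, c2 = Counter(ratings_t1), Counter(ratings_t2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# The same investigator categorizes eight projects at two time points
t1 = ["T1", "T1", "T2", "T3", "T2", "T1", "T3", "T2"]
t2 = ["T1", "T2", "T2", "T3", "T1", "T1", "T3", "T2"]
print(round(cohens_kappa(t1, t2), 3))  # → 0.619
```

A kappa in this range would be read as moderate agreement, consistent with the abstract's finding that some investigators had difficulty interpreting the categories.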
Evaluation of Communities of Practice and Diffusion of Innovation in Translational Science
Abigail Cohen,  University of Pennsylvania,  abigailc@mail.med.upenn.edu
The University of Pennsylvania's Clinical and Translational Science Award (CTSA) has forged a complex multi-institutional 'academic home' for clinical and translational research among Penn, the Children's Hospital of Philadelphia, the Wistar Institute and the University of the Sciences in Philadelphia, fostering interdisciplinary science from the discovery of new molecules to the study of drug action in large populations. As part of a multifaceted approach to evaluating this program, social network analysis will be used to measure changes over time, to determine whether the centers, cores and programs show patterns of connectedness across the alliance, and whether these nodes and ties result in higher productivity and greater numbers of work products. Analyses use existing databases (e.g., publication citations) and survey data, and are intended to evaluate how diffusion of innovation advances by examining the alliance's networks and their role in influencing the spread of new ideas and practices.
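The connectedness measures described above can be sketched with two basic network statistics: node degree (how many partners each center has) and graph density (observed ties over possible ties), with density compared across time points to measure change. The institution names below come from the abstract, but the ties are invented for illustration; the actual Penn evaluation's metrics and data are not specified here.

```python
# Hypothetical sketch of the social network analysis described above:
# nodes are alliance members, edges are collaborative ties (e.g.,
# co-authored publications). The edge list is invented, not real data.
from collections import defaultdict

edges = [
    ("Penn", "CHOP"), ("Penn", "Wistar"),
    ("Penn", "USciences"), ("CHOP", "Wistar"),
]

def degree_and_density(edges):
    """Degree per node and density of an undirected collaboration graph."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    degree = {node: len(neighbors) for node, neighbors in adj.items()}
    # Density: observed ties over the n*(n-1)/2 possible undirected ties
    density = 2 * len(edges) / (n * (n - 1))
    return degree, density

degree, density = degree_and_density(edges)
print(degree)             # Penn is the most connected node
print(round(density, 2))  # → 0.67
```

In an evaluation like the one described, computing density from, say, a baseline and a follow-up edge list would show whether connectedness across the alliance is increasing over time.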
Evaluation and Metaevaluation as Program, Project and Sub-project: Designing and Implementing the Evaluation of a Clinical and Translational Science Institute
Don Yarbrough,  University of Iowa,  d-yarbrough@uiowa.edu
Large institutional innovations with multiple components often need complex evaluations that serve numerous purposes and users. Using the example of an NIH-funded Center for Clinical and Translational Science, this paper conceptualizes an overall 'program of evaluation' using the business program/project management literature. In this conceptualization, each specified evaluation purpose has its own linked evaluation subproject, sharing some resources and activities with other evaluation subprojects. Individual evaluation subprojects focus on individual components, collaboration among components, overall governance, or evaluation capacity building and resource sharing. An important set of subprojects provides formative and summative metaevaluation of the evaluation subprojects, to ensure that the individual subprojects are efficient, effective, well-coordinated with each other and responsive to overall evaluation purposes and needs. The paper illustrates how to incorporate collaborative approaches, evaluation capacity building, program theory and logic, and evaluation standards and guidelines into this project-based design.

