
Session Title: Evaluating Contributions to Knowledge Translation for New Technologies or Medical Treatments
Multipaper Session 770 to be held in Texas D on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
John Reed, Innovologie LLC, jreed@innovologie.com
Translating New Knowledge From Technology Based Research Projects: An Intervention Evaluation Study
Presenter(s):
Vathsala Stone, State University of New York at Buffalo, vstone@buffalo.edu
Abstract: Obtaining societal benefits from publicly funded research has been a growing concern in recent years. This concern has given rise to Knowledge Translation (KT), an emerging field centered on promoting research impact through effective delivery of new knowledge, and it underscores the urgency of evaluating research impact. This paper will address the issues of generating and evaluating research impact and will discuss evaluation quality related to both processes. As a position paper, it focuses on translating new knowledge generated by applied research, specifically by technology-based research projects. It will present a KT intervention study currently under implementation and evaluation at a federally funded center on knowledge translation for technology transfer at the University at Buffalo, describe the proposed intervention strategy and its rationale along with the implementation and evaluation procedures, and highlight quality features of the intervention evaluation.
Evolving a High-Quality Evaluation System for the National Institutes of Health’s (NIH) HIV/AIDS Clinical Trials Research Networks
Presenter(s):
Scott Rosas, Concept Systems Inc, srosas@conceptsystems.com
Jonathan Kagan, National Institutes of Health, jkagan@niaid.nih.gov
Jeffrey Schouten, Fred Hutchinson Cancer Research Center, jschoute@fhcrc.org
William M Trochim, Cornell University, wmt1@cornell.edu
Abstract: A collaboratively authored evaluation framework identifying success factors across four domains was used to guide the evaluation of NIH's HIV/AIDS clinical trials networks. In each domain, broad evaluation questions were addressed by pilot studies conducted to generate information about a particular element of the research enterprise. Viewed as a scientific process, with iterative cycles of experimentation and revision designed to incrementally improve the quality of the overall evaluation system, these studies were expected to yield information that could be used in the near term to improve network functions and to update and advance the evaluation agenda as the state of knowledge evolves. This paper presents preliminary results of the evaluation studies in the four domains. Opportunities and challenges for conducting similar evaluation studies within large-scale research initiatives are highlighted. Implications for the next cycle of studies, integrative analyses of data to address cross-domain evaluation questions, and linkage to the original stakeholder-constructed framework are also discussed.
