Session Title: Analysis and Evaluation of Research Portfolios Using Quantitative Science Metrics: Practice

Panel Session 231 to be held in Texas D on Thursday, Nov 11, 9:15 AM to 10:45 AM

Sponsored by the Research, Technology, and Development Evaluation TIG

Chair(s):
Laurel Haak, Discovery Logic, laurel.haak@thomsonreuters.com

Abstract:
Increasingly, organizations involved in research and technology development are interested in applying quantitative approaches to evaluate the impact of research programs on participants and to assess whether programs are achieving their stated missions. Science metrics can complement qualitative evaluation methodologies; they include bibliometrics, the use of publication and citation data to derive measures of performance and quality, as well as other direct measures such as funding amounts and public health impact. This panel discusses practical applications of metrics in the evaluation of research programs: the use of bibliometrics in different evaluation settings, the development of novel metrics to address evaluation goals, and the use of metrics that accommodate differences in the time horizons of research portfolio outcomes.
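One widely used citation-based indicator of the kind described above is the h-index (Hirsch, 2005). A minimal Python sketch follows, assuming only a list of per-paper citation counts; it is an illustration of the general technique, not a method attributed to the panelists:

    def h_index(citations):
        """Return the largest h such that the researcher has h papers
        with at least h citations each (Hirsch, 2005)."""
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example: five papers cited 10, 8, 5, 2, and 1 times yield h = 3.
    print(h_index([10, 8, 5, 2, 1]))  # -> 3

Any single-number indicator of this kind compresses an entire citation distribution, which is one reason metrics must be adapted to context rather than applied mechanically.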
Practical Applications of Bibliometrics: What Makes Sense in Different Contexts?

Frédéric Bertrand, Science-Metrix Corp, frederic.bertrand@science-metrix.com
David Campbell, Science-Metrix Corp, david.campbell@science-metrix.com

The production of performance measures using bibliometric methods has proven effective in helping to answer important research evaluation and science policy questions. The design of bibliometric methods is critical and must be carefully adapted both to the organizational and research contexts and to the overall analytical framework of the evaluation. There is also an opportunity to better combine and integrate bibliometrics with other evaluation methods. It is therefore important for the evaluation community to understand the range of applications and limitations of bibliometric methods in different contexts. This paper presents bibliometric methods and associated analytical frameworks used to support evaluation processes in three contexts: 1) research funding organizations, 2) academic institutions, and 3) science-based governmental organizations (mandated/policy-driven research). These contexts are illustrated with sample bibliometric analyses covering scientific areas such as genomics, cancer research, environmental science, and natural resources.
Beyond Bibliometrics: An International Program Evaluation for Building Research Capacity

Liudmila Mikhailova, United States Army, lmikhailova@crdf.org

The paper focuses on findings from an eight-country impact evaluation study of Regional Experimental Support Centers (RESCs) in Eurasia and discusses quantitative metrics for measuring R&D programs in an international context. With funding from the U.S. Department of State, CRDF has established 21 RESCs since 1997 to build research capabilities at scientific institutions and integrate research into university systems. We will discuss the development and application of research capacity metrics, with results at three levels: 1) the institutional level, measuring contributions to building research capacity; 2) the regional level, measuring the extent to which RESCs created a more attractive climate for economic activity (e.g., a food and drug testing facility in Yerevan, Armenia, that facilitates the country's imports and exports, and an environmental testing center in Baku, Azerbaijan, that encourages responsible development of the country's oil resources); and 3) the knowledge production level, which includes national and international grants and publications in peer-reviewed journals.
Applying Metrics to Evaluate the Continuum of Research Outputs: Near- to Long-term Impact

Joshua Schnell, Discovery Logic, joshua.schnell@thomsonreuters.com
Beth Masimore, Discovery Logic, beth.masimore@discoverylogic.com
Laurel Haak, Discovery Logic, laurel.haak@thomsonreuters.com
Matt Probus, Discovery Logic, matt.probus@thomsonreuters.com
Michael Pollard, Discovery Logic, michael.pollard@thomsonreuters.com

The evaluation of research programs relies on varied measures of research outputs, and the quality and comprehensiveness of the data that capture these outputs are critical to the success of such evaluations. ScienceWire® is a data and software platform that integrates and interlinks public and proprietary data on research and its outcomes, including: publications indexed by MEDLINE and Thomson Reuters' Web of Knowledge; federal research grants from the National Institutes of Health, the National Science Foundation, the Department of Energy, the Department of Defense, and the US Department of Agriculture; patent applications and issued patents from the US Patent and Trademark Office; and approved drug products from the FDA Orange Book. We will demonstrate practical applications of this integrated data platform in evaluating near- and long-term research outputs, from grant applications to research outputs to impacts on patient care.
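To make the idea of interlinking concrete, here is a minimal sketch of one such link, joining publications to the grants they acknowledge by grant number. The record fields and matching rule are illustrative assumptions, not ScienceWire's actual schema:

    # Hypothetical records; field names are assumptions for illustration,
    # not ScienceWire's actual schema.
    publications = [
        {"pmid": "100001", "grants_acknowledged": ["R01GM012345"]},
        {"pmid": "100002", "grants_acknowledged": ["R01CA067890", "DE-FG02-99ER54321"]},
    ]
    grants = {
        "R01GM012345": {"agency": "NIH"},
        "R01CA067890": {"agency": "NIH"},
    }

    def link_publications_to_grants(pubs, grant_index):
        """Join publications to funded grants on acknowledged grant numbers."""
        links = []
        for pub in pubs:
            for number in pub["grants_acknowledged"]:
                if number in grant_index:
                    links.append((pub["pmid"], number, grant_index[number]["agency"]))
        return links

    print(link_publications_to_grants(publications, grants))
    # -> [('100001', 'R01GM012345', 'NIH'), ('100002', 'R01CA067890', 'NIH')]

In practice such linking must also handle variant grant-number formats and ambiguous author names, which is why the quality and comprehensiveness of the underlying data are stressed above.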