| Session Title: New Approaches to Assessing National Institutes of Health (NIH) Research Programs |
| Multipaper Session 582 to be held in Malibu on Friday, Nov 4, 8:00 AM to 9:30 AM |
| Sponsored by the Research, Technology, and Development Evaluation TIG |
| Chair(s): |
| Robin Wagner, National Institutes of Health, wagnerr2@mail.nih.gov |
| Abstract: We present four papers on new approaches and tools that are being developed to inform the evaluation of research programs sponsored by the U.S. National Institutes of Health (NIH), which invests over $30 billion per year in biomedical research. The first paper considers how the traditional method of using expert opinion to assess research program performance has been implemented and how it can be enhanced. The second and third papers employ different text mining and visualization tools to characterize research portfolios, glean insights into the science supported, and facilitate the management of research programs. The fourth paper uses network analysis to evaluate and compare researcher collaborations in two distinct epidemiological cohort studies. While the examples presented in this session focus on NIH, the methods demonstrated can be extended to other organizations and countries seeking to better understand and inform strategies for managing their research programs. |
| Expert Opinion as a Performance Measure in R&D Evaluation |
| Kevin Wright, National Institutes of Health, wrightk@mail.nih.gov |
| Expert opinion continues to be the gold standard for assessing the performance, impact, and importance of R&D programs. This presentation will explore the use of expert opinion as the primary performance measure in evaluations of biomedical research programs. Questions that will be addressed include: 1) Is expert opinion really a performance measure?; 2) In what circumstances is expert opinion used as the primary performance measure in evaluations of biomedical R&D programs?; 3) What are the strengths and limitations of expert opinion?; 4) What are the various approaches to using expert opinion?; and 5) What good practices might be considered when planning an evaluation that uses expert opinion? This presentation will be useful to evaluators interested in using expert opinion to evaluate R&D programs. |
| Text Mining for Visualization of Temporal Trends in NIH-Funded Research |
| L Samantha Ryan, National Institutes of Health, lindsey.ryan@nih.gov |
| Carl W McCabe, National Institutes of Health, carl.mccabe@nih.gov |
| Allan J Medwick, National Institutes of Health, allan.medwick@nih.gov |
| Using text mining methods, we will analyze and visualize topical changes in the abstracts of extramural research grants funded by the National Institutes of Health (NIH) over the past decade. The project will use publicly available data provided by the NIH (from RePORT), free and open-source analysis software, and a freely available visualization environment produced by Google (Motion Charts). Our methods will allow the user to interactively explore the first appearance and subsequent rise (or decline) of substantive keywords in NIH abstracts and to watch these changes animate over time. The corpus of abstracts may be subdivided into categories (e.g., fiscal year) so that the user can explore and compare patterns and changes in NIH research funding. |
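The core computation behind a temporal keyword trend of this kind is a per-year frequency count over the abstract corpus. A minimal sketch, using a hypothetical mini-corpus and keywords (the actual RePORT data and tooling are not shown here):

```python
from collections import Counter, defaultdict

# Hypothetical (fiscal_year, abstract_text) pairs standing in for
# NIH RePORT grant abstracts.
abstracts = [
    (2008, "genome wide association study of cardiovascular risk"),
    (2009, "genome sequencing and epigenetics of cancer"),
    (2010, "epigenetics and microbiome interactions in the gut"),
    (2010, "microbiome diversity in the human gut"),
]

def keyword_trend(abstracts, keywords):
    """Count occurrences of each keyword per fiscal year."""
    trend = defaultdict(Counter)
    for year, text in abstracts:
        tokens = text.lower().split()
        for kw in keywords:
            trend[kw][year] += tokens.count(kw)
    return trend

trend = keyword_trend(abstracts, ["genome", "microbiome"])
# "microbiome" first appears in FY2010; "genome" fades after FY2009.
print(dict(trend["microbiome"]))  # {2010: 2}
print(dict(trend["genome"]))      # {2008: 1, 2009: 1}
```

A table like this (keyword x fiscal year) is exactly the shape of input a Motion Charts animation consumes, with each keyword traced as a moving point across years.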
| Assessing Grant Portfolios Using Text-Mining and Visualization Methods |
| Elizabeth Ruben, National Institutes of Health, elizabeth.ruben@nih.gov |
| Kristianna Pettibone, National Institutes of Health, kristianna.pettibone@nih.gov |
| Jerry Phelps, National Institutes of Health, phelps@niehs.nih.gov |
| Christina Drew, National Institutes of Health, drewc@niehs.nih.gov |
| Granting agencies have an ongoing need for tools to assure that their portfolio of grants is current, mission-focused, and of high quality. We are therefore exploring the novel use of a text-mining data visualization tool, OmniViz™, to examine patterns in the science distribution of our grants, analyze the assignment of project officers, and identify gaps and emerging areas of research. We explore the effects of various options and choices, such as the source data, the number of clusters, and the clustering method. We show examples of our data plots and describe how they could be used to think about the portfolios in new ways and inform our science management. Finally, we discuss the challenges and opportunities of these approaches. This presentation will be useful to evaluators interested in learning how to use visualization tools for data analysis and in understanding how the findings can be applied to science management. |
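Tools of this kind typically cluster grants by the textual similarity of their abstracts. A minimal sketch of the underlying idea, using hypothetical abstracts and a simple bag-of-words cosine similarity (not the proprietary OmniViz algorithm):

```python
import math
from collections import Counter

# Hypothetical grant abstracts; a stand-in for the kind of corpus a
# commercial text-mining tool clusters and plots.
docs = {
    "G1": "asthma airway inflammation among children",
    "G2": "airway inflammation and asthma triggers",
    "G3": "arsenic exposure in drinking water",
}

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

vecs = {g: Counter(text.split()) for g, text in docs.items()}

# G1 and G2 share vocabulary (asthma, airway, inflammation) and would
# cluster together; G3 shares nothing with either and sits apart.
print(round(cosine(vecs["G1"], vecs["G2"]), 2))  # 0.6
print(cosine(vecs["G1"], vecs["G3"]))            # 0.0
```

Choices such as how text is tokenized, how many clusters are requested, and which linkage method is used all shift where grants land in the resulting map, which is why the abstract emphasizes exploring those options.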
| Network Analysis of Collaboration Among National Heart, Lung and Blood Institute (NHLBI) Funded Researchers |
| Carl W McCabe, National Institutes of Health, carl.mccabe@nih.gov |
| Mona Puggal, National Institutes of Health, mona.pandey@nih.gov |
| Lindsay Pool, National Institutes of Health |
| Rediet Berhane, National Institutes of Health |
| Richard Fabsitz, National Institutes of Health, richard.fabsitz@nih.gov |
| Robin Wagner, National Institutes of Health, robin.wagner@nih.gov |
| We use freely available, open-source analytical tools to explore co-authorship networks involving researchers funded by the NIH's National Heart, Lung, and Blood Institute (NHLBI). Underlying our analysis is an interest in the forms of collaboration that exist among researchers in two distinct cohort studies: the Cardiovascular Health Study and the Strong Heart Study. We use co-authorship as a proxy for collaboration, and we produce statistics and visualizations to help dissect the properties of these networks. To add a further dimension to the analysis, we examine aspects of network structure in relation to characteristics of the researchers and publications (e.g., institutional affiliation or publication title). Our presentation will use a step-by-step discussion of this project to illustrate some of the computational analysis tools and techniques that may be used to explore the concept of collaboration among a body of researchers. |
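The basic construction behind such an analysis is to turn publication author lists into an undirected network, linking every pair of co-authors on a paper, and then compute network statistics such as degree. A minimal sketch with hypothetical author names (not the actual cohort-study data):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical publication records: each entry is the author list of one
# paper; co-authorship on a paper links every pair of its authors.
papers = [
    ["Alvarez", "Begay", "Chen"],
    ["Alvarez", "Chen"],
    ["Begay", "Dele"],
]

# Build an undirected co-authorship network, weighting each edge by the
# number of papers the pair shares.
edges = defaultdict(int)
neighbors = defaultdict(set)
for authors in papers:
    for a, b in combinations(sorted(authors), 2):
        edges[(a, b)] += 1
        neighbors[a].add(b)
        neighbors[b].add(a)

# Degree (number of distinct collaborators) is a simple measure of how
# central a researcher is in the collaboration network.
degree = {a: len(n) for a, n in neighbors.items()}
print(degree["Begay"])              # 3 distinct collaborators
print(edges[("Alvarez", "Chen")])   # 2 shared papers
```

Annotating each node with researcher attributes (e.g., institutional affiliation) or each edge with publication attributes then allows the structural comparisons between the two cohort studies that the abstract describes.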