Session Title: Out of Control? Selecting Comparison Groups for Analyzing National Institutes of Health Grants and Grant Portfolios
Panel Session 906 to be held in Wekiwa 6 on Saturday, Nov 14, 3:30 PM to 5:00 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Christie Drew, National Institutes of Health, drewc@niehs.nih.gov
Abstract: Evaluators at the U.S. National Institutes of Health (NIH) are often called upon to assess the progress of a group of grantees or individuals who have received NIH support. To do this, we select additional sets of grantees or individuals to serve as comparison groups. This session explores recent approaches to selecting meaningful comparison groups for analytical questions common at NIH and in other science-management settings. The examples are drawn from different NIH entities to provide a variety of contexts. The session focuses primarily on the methodology of comparison group selection rather than the results of particular evaluations. Each speaker will briefly review their evaluation design, comparison group selection process, and analytical approach, paying particular attention to methodological challenges that affected the design or results. A discussion with the audience about the strengths and weaknesses of the different approaches will follow the presentations.
Establishing a Comparison Set for Evaluating Unsolicited P01s at the National Institute of Environmental Health Sciences
Christie Drew, National Institutes of Health, drewc@niehs.nih.gov
Jerry Phelps, National Institutes of Health, phelpsj@niehs.nih.gov
Martha Barnes, National Institutes of Health, barnes@niehs.nih.gov
The unsolicited P01 mechanism at the National Institute of Environmental Health Sciences (NIEHS) is intended to fund multi-project, investigator-initiated research. In general, P01s are expected to be "greater than the sum of their parts." P01 projects have been funded for widely varying durations (several for more than 35 years) on a diverse range of scientific topics. A typical P01 consists of approximately three sub-projects, each roughly equal in cost to an R01 grant. Is it valid to expect a P01 to produce three times the publications and citations per year of funding of comparable R01s? What is the best set of R01s for comparison? To address these questions, a mathematical matching algorithm was developed to identify scientifically relevant R01s for the recently active P01 portfolio. Program officers assisted in the final selection of comparison R01s. Key challenges were the varying lengths of the P01s, the range of scientific topics addressed, and the unique nature of several P01s.
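
The abstract does not specify the matching algorithm itself. As a minimal sketch of one common approach, ranking candidate R01s by text similarity of their abstracts, the following Python fragment illustrates the idea; the TF-IDF choice and all names here are assumptions, not the NIEHS method:

    # Illustrative only: rank candidate R01s by textual similarity of
    # abstracts to each P01. The actual NIEHS matching algorithm is not
    # described in this session abstract.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def match_comparison_r01s(p01_abstracts, r01_abstracts, n_matches=3):
        """Return, for each P01, indices of the n most similar R01s."""
        vectorizer = TfidfVectorizer(stop_words="english")
        # Build one shared vocabulary across both portfolios.
        tfidf = vectorizer.fit_transform(list(p01_abstracts) + list(r01_abstracts))
        p01_vecs = tfidf[:len(p01_abstracts)]
        r01_vecs = tfidf[len(p01_abstracts):]
        sims = cosine_similarity(p01_vecs, r01_vecs)
        # Candidates are only ranked for review; program officers make
        # the final selection, as the abstract notes.
        return [row.argsort()[::-1][:n_matches].tolist() for row in sims]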
It's A Small World After All: Describing and Assessing National Institutes of Health (NIH)-Funded Research in the Context of A Scientific Field
Sarah Glavin, National Institutes of Health, glavins@mail.nih.gov
Jamelle Banks, National Institutes of Health, banksj@mail.nih.gov
Paul Johnson, National Institutes of Health, pjohnson@mail.nih.gov
Although the U.S. National Institutes of Health (NIH) is the largest supporter of biomedical research in the world, most published research is not supported by NIH. Recent evaluations of NIH research centers programs have compared publications of NIH-supported researchers with publications across the same scientific field. Such an approach can allow the NIH to answer questions such as: (1) how do the specific research types and subareas supported by NIH compare with research being published in the field generally? (2) what journals are used to disseminate research results from the NIH program, and how do those journals compare with those used in the field as a whole? and (3) what other organizations are supporting research in this area, and how does their research compare with the NIH program? However, identifying "the world" as a comparison group is a challenge. The presentation offers considerations for implementing this approach, including searching and sampling strategies and issues of how to interpret the results of the comparisons.
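
As a concrete illustration of question (2), a minimal sketch in Python of comparing a program's journal mix against the field's might look like the following; the journal lists are placeholders, since designing the real bibliographic searches is the subject of the talk:

    # Illustrative only: compare the share of publications each journal
    # carries for the program versus the field as a whole. Journal
    # names are placeholders, not data from the evaluation.
    from collections import Counter

    def journal_shares(journals):
        """Fraction of publications appearing in each journal."""
        counts = Counter(journals)
        total = sum(counts.values())
        return {j: n / total for j, n in counts.items()}

    program = ["Journal A", "Journal A", "Journal B"]             # placeholder
    field = ["Journal A", "Journal B", "Journal B", "Journal C"]  # placeholder
    p, f = journal_shares(program), journal_shares(field)
    for j in sorted(set(p) | set(f)):
        # Positive values: the program publishes there more than the field.
        print(f"{j}: {p.get(j, 0) - f.get(j, 0):+.1%}")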
NIH Loan Repayment Program: Applying Regression Discontinuity to Assess Program Effect
Milton Hernandez, National Institutes of Health, mhernandez@niaid.nih.gov
Laure Haak, Discovery Logic, laurelh@discoverylogic.com
Rajan Munshi, Discovery Logic, rajanm@discoverylogic.com
Matt Probus, Discovery Logic, mattp@discoverylogic.com
NIH's Loan Repayment Program (LRP) repays educational loan debt for individuals who commit to conducting biomedical or behavioral research. A recent evaluation examined whether LRP awards are effective in their broad purpose of recruiting and retaining early-career health professionals in biomedical research careers. New LRP applicants between FY2003 and FY2007 were defined as the study cohort. Applicants and awardees on the "funding bubble" (the portion of the application score distribution where the chance of being funded is close to 50%) were identified. A regression discontinuity design was then used to examine the impact of receiving an LRP award on subsequent involvement in the extramural NIH-funded workforce for funded and unfunded applicants. Outcomes measured for the study included grant applications, grant awards, participation in grants in roles other than principal investigator, and publications. This presentation will focus mainly on the strengths and weaknesses of the methods used to define the "funding bubble" and apply the regression discontinuity approach.
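
The abstract names the design but not the model. A minimal sketch of a sharp regression discontinuity estimate at a funding cutoff, assuming a scalar application score with a known cutoff (both stand-ins here, not the evaluation's actual specification), could be:

    # Illustrative only: local linear regression discontinuity within a
    # bandwidth around the funding cutoff (the "funding bubble"). The
    # score variable, cutoff, and bandwidth are stand-ins.
    import numpy as np
    import statsmodels.api as sm

    def rdd_effect(score, outcome, cutoff=0.0, bandwidth=1.0):
        """Estimate the jump in `outcome` at the funding cutoff."""
        keep = np.abs(score - cutoff) <= bandwidth   # bubble window
        s = score[keep] - cutoff                     # centered score
        treated = (s >= 0).astype(float)             # funded side
        # Separate slopes on each side of the cutoff.
        X = sm.add_constant(np.column_stack([treated, s, treated * s]))
        fit = sm.OLS(outcome[keep], X).fit()
        return fit.params[1], fit.bse[1]             # effect, std. error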
The Use of Propensity Scores in a Longitudinal Science Study of Minority Biomedical Research Support From the National Institute of General Medical Sciences
Mica Estrada-Hollenbeck, California State University San Marcos, mestrada@csusm.edu
Anna Woodcock, Purdue University, awoodcoc@psych.purdue.edu
David Merolla, Kent State University, dmerolla@kent.edu
P Wesley Schultz, California State University San Marcos, psch@csusm.edu
The National Institute of General Medical Sciences (NIGMS) has promoted Minority Biomedical Research Support through a variety of mechanisms for many years. This presentation reports on a longitudinal evaluation of the Research Initiative for Scientific Enhancement (RISE), whose goal was to determine the efficacy of the RISE program. A key challenge in quasi-experimental studies is estimating causal effects of the program, because random assignment of participants to programs is not possible. Propensity scores allow researchers to correct for selection bias by comparing treated subjects only to comparable control subjects, thereby reducing bias in estimates of treatment effects. This paper will describe how propensity scores provide a flexible method for determining intervention effects, and how the scores were calculated from a variety of relevant predictor variables (e.g., gender, ethnicity, GPA, intention to stay in the sciences) and then used to select a matched comparison sample of non-RISE participants.
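
A minimal sketch of the propensity-score step might look like the following; the variable names and the 1:1 nearest-neighbor rule are illustrative assumptions, while the paper itself describes the actual calculation:

    # Illustrative only: estimate propensity scores with logistic
    # regression, then match each RISE participant to the nearest
    # non-RISE student on that score (1:1, with replacement).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def match_controls(X_treated, X_control):
        """Return, for each treated subject, the index of its match."""
        X = np.vstack([X_treated, X_control])
        y = np.r_[np.ones(len(X_treated)), np.zeros(len(X_control))]
        model = LogisticRegression(max_iter=1000).fit(X, y)
        ps = model.predict_proba(X)[:, 1]            # P(RISE | covariates)
        ps_t, ps_c = ps[:len(X_treated)], ps[len(X_treated):]
        # Closest control on the propensity score for each participant.
        return np.abs(ps_t[:, None] - ps_c[None, :]).argmin(axis=1)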
