Strategies and Lessons Learned from Implementing External Peer Review Panels Online: A Case Example from a National Research Center

Presenter(s):
Daniela Schroeter, Western Michigan University, daniela.schroeter@wmich.edu
Kelly Robertson, Western Michigan University, kelly.robertson@wmich.edu
Chris Coryn, Western Michigan University, chris.coryn@wmich.edu
Richard Zinser, Western Michigan University, richard.zinser@wmich.edu

Abstract:
As part of evaluating a national research center's effectiveness and performance on Government Performance and Results Act (GPRA) measures, peer review panels are conducted annually. The purpose of the panel studies is to assess (a) the relevance of the research to practice and (b) the quality of disseminated products. Traditionally, peer review panels are conducted face-to-face; however, to increase the feasibility of the annual study for the sponsors, panelists, and evaluators, these panels are implemented online using both synchronous and asynchronous communication. This presentation focuses on strategies used and lessons learned over three iterations of the panel studies, with specific attention to (a) training, calibrating, and preparing panelists for each study; (b) asynchronous independent rating procedures; and (c) effective synchronous deliberation procedures.


Can Traditional Research and Development Evaluation Methods Be Used for Evaluating High-Risk, High-Reward Research Programs?

Presenter(s):
Mary Beth Hughes, Science and Technology Policy Institute, m.hughes@gmail.com
Elizabeth Lee, Science and Technology Policy Institute, elee@ida.org

Abstract:
Over the last several years, the scientific community has seen growth in non-traditional research programs that aim to fund scientists and projects of a 'high-risk, high-reward' nature. To date, evaluations of these programs have continued to rely on standard evaluation methods such as expert review and bibliometrics. Use of these standard methods, however, is predicated on a set of assumptions that may not be valid in the context of high-risk, high-reward programs. This paper presents the logic underlying several standard evaluation methods, describes a typology of high-risk, high-reward research programs, and critically assesses where standard methods are applicable to such programs and where they fail. Where applicable, examples are drawn from recent evaluations of high-risk, high-reward research programs at the National Institutes of Health and the National Science Foundation.