Session Title: Research and Development Challenges to Meeting Government Performance Requirements

Panel Session 423, to be held in Room 110 in the Convention Center on Thursday, Nov 6, 4:30 PM to 6:00 PM

Sponsored by the Government Evaluation TIG and the Research, Technology, and Development Evaluation TIG

Chair: Kathryn Law, National Institutes of Health, lawka@od.nih.gov

Discussant: Deborah Duran, National Institutes of Health, durand@od.nih.gov
Abstract:
The federal government seeks to fund effective programs that achieve long-term goals. However, current performance reporting methodologies are not sufficient to assess many programs, especially research and development (R&D) programs. This panel will discuss the challenges the federal government faces in assessing and managing R&D programs. These programs often conduct high-risk, high-reward research that may not achieve all of its proposed outcomes but does yield unplanned results that guide the discovery process. Large scientific research initiatives also require new ways of thinking about evaluation: aggregating the evaluation results of individual components does not yield a sound assessment of the entire program. Such programs require new methodologies for adaptive and systems assessment that can more appropriately capture the value of these initiatives. Finally, incorporating assessment results into planning and decision-making activities requires the development of structures that encourage positive adaptive change.

High Risk, High Reward Research at the National Institutes of Health (NIH)

Goutham Reddy, National Institutes of Health, reddygo@mail.nih.gov

High-risk/high-reward programs are challenged to meet the many performance reporting requirements placed on federal programs. Current methodologies cannot assess the adaptations needed to advance science. Initially, high-risk innovative projects may struggle to meet their planned goals, yet they adapt appropriately by dynamically following the direction of good scientific discovery. Unplanned results emerge that guide the discovery process, and no current assessment approach can properly determine project performance under these conditions. The current practice of setting a planned annual milestone and then judging it simply as met or not met is inadequate. Furthermore, high-reward projects can only be identified once their impact can be assessed, which happens after the project ends, and current requirements do not enable follow-up reporting. Until better methodologies and policies are developed, R&D programs must be allowed to use alternative strategies, such as adaptive annual measures with sound scientific justifications.

Evaluation Policy and Evaluation Practice for Large Scientific Research Initiatives

William Trochim, Cornell University, wmt1@cornell.edu

The Clinical and Translational Science Awards (CTSA) initiative is one of the largest scientific research efforts funded by the National Institutes of Health. It is designed to transform how clinical and translational research is conducted, ultimately enabling researchers to deliver new treatments to patients more efficiently and quickly. The CTSA consortium currently includes 24 academic health centers (AHCs) located throughout the nation; by 2012, about 60 institutions will be linked together nationally. From its inception, the CTSA has integrated evaluation into its efforts at multiple levels. This presentation describes the different types of evaluation policies that have been instituted or created, including the requirement of a separate evaluation proposal within each center grant proposal, the use of logic modeling, the development of a national cross-center evaluation steering committee, and the integration of performance and outcome evaluation. It also considers the implementation and practice challenges of evaluating this type of large, complex, and adaptive research endeavor.

What's the Use of Studying Science: Case - Using Profiling Analysis to Inform Science Management Decision Making

Ken Ambrose, National Institutes of Health, ambrosek@mail.nih.gov

Required performance reporting often begins with self-assessment. The results of these assessments can support current reporting and decision-making structures. However, if the conclusions challenge current practices, such research becomes a disruptive influence in the social, political, and economic system of an organization. The challenge becomes one of creating safe harbors for change that help communities recognize existing structures, address the complexities of organizational change, and incorporate stakeholders' values and concerns. This presentation uses the case of an analysis of scientist profiles in funding decisions to explore the role of infrastructure and incentives in managing change based on internal assessment. It also discusses factors that help communities develop structures that encourage positive adaptive change.