Evaluation 2008

Contact emails are provided for one-to-one contact only and may not be used for mass emailing or group solicitations.

Session Title: Improving Evaluation Design and Methods: Examples for Research Programs
Multipaper Session 672 to be held in Room 112 in the Convention Center on Friday, Nov 7, 3:25 PM to 4:10 PM
Sponsored by the Research, Technology, and Development Evaluation TIG
Chair(s):
Stephanie Shipp,  Science and Technology Policy Institute,  sshipp@ida.org
The Role of Case Studies in Evaluation of Research, Technology, and Development Programs
Presenter(s):
George Teather,  Performance Management Network Inc,  george.teather@pmn.net
Steve Montague,  Performance Management Network Inc,  steve.montague@pmn.net
Abstract: Evaluation of RT&D programs requires multiple lines of evidence to support credible conclusions and useful recommendations. Case studies are one of the primary sources of evidence collected in many evaluation studies. They can probe deeply into selected projects and trace the pathway from activities and outputs to early, intermediate, and longer-term outcomes. This paper examines the strengths and limitations of case studies and presents examples of case studies carried out as part of recent evaluations of applied research projects.
Building Evaluation into Program Design: A Generic Evaluation Logic Model for Biomedical Research Programs at the National Cancer Institute
Presenter(s):
P Craig Boardman,  Science and Technology Policy Institute,  pboardma@ida.org
James Corrigan,  National Institutes of Health,  corrigan@mail.nih.gov
Lawrence S Solomon,  National Institutes of Health,  solomonl@mail.nih.gov
Christina Viola Srivastava,  Science and Technology Policy Institute,  cviola@ida.org
Kevin Wright,  National Institutes of Health,  wrightk@mail.nih.gov
Brian Zuckerman,  Science and Technology Policy Institute,  bzuckerm@ida.org
Abstract: Building evaluation into R&D program design is an ideal that requires program managers to identify clear program goals, and corresponding outcome criteria for future assessments, in advance. This approach can be facilitated by the development of generic logic models, evaluative questions, and corresponding metrics. While some U.S. government agencies have developed such generic logic models (e.g., the USDA’s Extension Service and the DOE Office of Energy Efficiency and Renewable Energy), the National Cancer Institute (and by extension the National Institutes of Health) funds a larger portfolio of programs with a wider range of aims. We propose a framework for research evaluation that is both applicable to a broad range of program types across NCI and scalable to accommodate programs of different sizes and scopes. Another distinguishing feature of the framework is that it is designed to elicit a thorough characterization of the “environment” of the program in addition to its motivations and goals.