Session Title: Rating Tools, Causation, and Performance Measurement
Multipaper Session 828, to be held in Calvert Ballroom Salon E on Saturday, November 10, from 1:50 PM to 3:20 PM
Sponsored by the Government Evaluation TIG
Chair(s):
David J Bernstein,  Westat,  davidbernstein@westat.com
Causation in Federal Government Evaluations
Presenter(s):
Mina Zadeh,  United States Department of Health and Human Services,  mm_hz@yahoo.com
Abstract: This presentation will address “causation” in Federal government evaluations: do high-quality, high-impact evaluations have to address causation in order to be deemed effective? It will use several high-impact evaluations within the US Department of Health and Human Services to demonstrate that evaluations can be effective without addressing the causes of the vulnerabilities identified within a program.
Selecting Measures for External Performance Accountability: Standards, Criteria, and Purpose
Presenter(s):
James Derzon,  Pacific Institute for Research and Evaluation,  jderzon@verizon.net
Abstract: Beginning with Congressional passage of the Government Performance and Results Act of 1993 (GPRA) and culminating in President Bush's implementation of the Office of Management and Budget's Program Assessment Rating Tool (PART), federal agencies are required to use performance measures to demonstrate their overall effectiveness. As a management information technique borrowed from the management-by-objectives (MBO) philosophy of total quality management, performance measures contribute to an information system by providing a narrow view of certain critical aspects of a program's performance. However, many grant programs and agencies addressing human needs have found it difficult to demonstrate effectiveness under PART. Using examples from an evaluation of the AmeriCorps*NCCC PART performance measures, this presentation will introduce an instrument for evaluating performance measures for external review and will distinguish criteria for evaluating such measures from indicators useful for in-house program monitoring and program evaluation.
Evaluating an Evaluation Process: Lessons Learned From the Evaluation of the National Flood Insurance Program
Presenter(s):
Marc Shapiro,  Independent Consultant,  shapiro@urgrad.rochester.edu
Abstract: The NFIP underwrites 5.4 million policies covering over $1 trillion in assets, with greater average annual outlays than Social Security. It is also one of the country's most complicated governmental programs: it provides the public good of risk information, manages floodplain risks, and fills a market void by offering insurance the private market is reluctant to provide. It affects governments ranging from small communities to the national government, involves sometimes-conflicting goals, and touches an array of stakeholders. Evaluating the 30-year-old program was a complicated six-year process involving $5 million and 14 studies. In addition, toward the end of the evaluation, the hurricane seasons of 2004 and 2005 heightened attention to the previously obscure program, creating the potential for the findings to be politicized. This presentation discusses lessons learned, including utilizing stakeholders, shaping client expectations, aligning program and evaluation goals, exploiting policy windows, and more.