Session Title: Tools for Improving the Quality of Evaluations: Four Examples From the Field

Panel Session 862 to be held in Lone Star A on Saturday, Nov 13, 2:50 PM to 4:20 PM

Sponsored by the Presidential Strand
and the Environmental Program Evaluation TIG

Chair(s):
Britta Johnson, United States Environmental Protection Agency, johnson.britta@epa.gov

Abstract:
Like their counterparts at many other federal agencies, evaluators at the U.S. Environmental Protection Agency (EPA) face a number of constraints that can hamper the quality of our evaluation efforts, including poor data quality, missing data, and limited time and resources. The U.S. EPA’s Evaluation Support Division has used several tools to help mitigate the impact of these constraints. Through the examination of four case studies, this panel session will provide practical examples of how evaluability assessment, expert/peer review, and integrating evaluation into the design of a program are valuable tools for improving: 1) the quality of measures, 2) data collection strategies and outcome data, 3) evaluation design, and 4) our understanding of the quality and availability of data for evaluation. The panel will describe how each tool was applied during the conduct of an evaluation and which aspects of evaluation quality were improved as a result.

Integrating Evaluation Into Program Design

Matt Keene, United States Environmental Protection Agency, keene.matt@epa.gov

Building evaluation into the design of programs presents the U.S. Environmental Protection Agency (EPA) with opportunities to improve the quality of its evaluations. In cooperation with the Paint Product Stewardship Initiative (PPSI), the U.S. EPA established an evaluation committee to systematically integrate participatory evaluation into the design of the Oregon Paint Stewardship Pilot Program. In this presentation we review the process the evaluation committee used to integrate evaluation into the program’s design; summarize the positive and negative effects on the development of questions and measures, evaluation design, and data collection; and assess the challenges and benefits of working collaboratively to investigate the effectiveness and impact of management strategies. Finally, we draw a relationship between the history and status of this evaluation and some criteria for determining which programs warrant the resources necessary to build evaluation into their design, and which do not.

Using Evaluability Assessment to Understand Data Limitations and Help Design an Appropriate Evaluation

Michelle Mandolia, United States Environmental Protection Agency, mandolia.michelle@epa.gov

To increase awareness of the rules and regulations governing the construction and operation of ethanol plants in its Region, EPA Region 7 staff published a compliance assistance manual for these facilities. Region 7 was interested in evaluating the manual’s success in improving industry compliance with the relevant rules and regulations. To begin the evaluation, Region 7 requested that an evaluability assessment (EA) be conducted to determine whether enough information was available to answer the desired evaluation questions. The EA helped shape the evaluation’s information collection plan by clarifying what information was available and which collection approaches were allowable and feasible. The results of the assessment were then used to inform the more detailed evaluation methodology.

Using Expert/Peer Review to Improve the Quality of an Evaluation Methodology: Tribal General Assistance Program Case Study

Yvonne Watson, United States Environmental Protection Agency, watson.yvonne@epa.gov
Tracy Dyke Redmond, Industrial Economics, Inc., tdr@indecon.com

The primary purpose of the EPA’s Tribal General Assistance Program (GAP) is to help federally recognized tribes and intertribal consortia build the basic components of a tribal environmental program, which may include planning, developing, and establishing the administrative, technical, legal, enforcement, communication, and outreach infrastructure. An evaluation was conducted to determine how effective GAP has been in building Tribal environmental capacity. To improve the rigor and quality of the evaluation, two expert review panels, the first academic and the second Tribal, were convened to identify concerns with the methodology. As the evaluation advisor, Yvonne Watson will contribute to the session by providing an overview of the peer review process and the results of the academic and Tribal peer reviews, highlighting similarities and differences between the two reviews and the importance of using the peer review process to ensure cultural sensitivity when addressing concerns about the quality of the evaluation.

Using Expert/Peer Review to Improve the Quality of an Evaluation Methodology: Compliance Assistance Outcomes Case Study

Terell Lasane, United States Environmental Protection Agency, lasane.terell@epa.gov

EPA provides compliance assistance to the regulated community, including local governments and tribes, to help them understand their regulatory obligations and prevent violations. Some of these assistance activities lead to behavior changes that result in compliance improvements and environmental benefits. The Office of Compliance (OC) within EPA’s Office of Enforcement and Compliance Assurance (OECA) is implementing a pilot project to evaluate compliance assistance (CA) outcomes. The pilot uses a quasi-experimental design to determine whether there is a statistically significant correlation between CA and behavior change for auto-body repair shops in Massachusetts that are offered CA on the new Clean Air Act Area Source Rule for paints and coatings (Subpart 6H), along with limited assistance on Resource Conservation and Recovery Act (RCRA) regulations. As the advisor for the evaluation, Terell Lasane will share the results of the expert review of this statistically valid pilot, noting how the reviewers’ feedback was instrumental in addressing data quality and design issues.