Evaluation 2008



Session Title: Evaluation For Improving Occupational Safety: Examples From a Federally Funded Government-Industry Cooperative Program
Panel Session 321 to be held in the Agate Room Section C on Thursday, Nov 6, 1:40 PM to 3:10 PM
Sponsored by the Business and Industry TIG
Chair(s):
Michael Coplen,  Federal Railroad Administration,  michael.coplen@dot.gov
Abstract: The Federal Railroad Administration has embarked on an ambitious long-term effort to improve safety and safety culture in the U.S. railroad industry by funding a series of innovative technological, analytical, behavioral, and organizational safety improvement programs. The effort encompasses building internal evaluation capacity at the FRA, funding evaluations of its innovative programs, changing its own internal organizational culture, and integrating evaluation into its major new approach to safety known as the Risk Reduction Program.
Evaluation and Evaluation Capacity Building at the Federal Railroad Administration: History, Purpose, and Future
Michael Coplen,  Federal Railroad Administration,  michael.coplen@dot.gov
Evaluation and the AEA have evolved into a cross-disciplinary field that encompasses many different applications and disciplines. Relatively little of the evaluation field, however, has been applied to occupational safety. The Federal Railroad Administration's Human Factors R&D Program is making that effort. We have embarked on an ambitious long-term initiative to improve safety and safety culture in the U.S. railroad industry by funding a series of demonstration projects and integrating various evaluation methods into each of the demonstrations. Pilot demonstrations emphasize innovative behavioral and organizational safety improvement methods, incorporating evaluation methods to increase stakeholder involvement and improve utilization, impact, and effectiveness. As part of this effort, we have also been building our own internal evaluation capacity. This presentation will summarize the evaluations being carried out and the efforts made to build capacity.
A Meta-logic Model for Evaluating Safety Programs: Uses and Limitations
Jonathan A Morell,  TechTeam Government Solutions,  jonny.morell@newvectors.net
Whenever evaluations of related programs are carried out, each program's logic model will share similarities with, and differ from, the logic models of the other programs. The similarities can be captured in a meta-logic model, which can then be used to generate the unique models for each program. Some of the differences in the program-specific models will involve only low-level detail, while others will reflect important idiosyncratic differences among the programs. It is useful to think in terms of a hierarchy of logic models as a way to design evaluations whose findings can be compared, and as a way to understand differences in the underlying theories of the different programs. This meta-modeling approach is useful not only as a way of testing program theory, but also as a way of understanding evolutionary, non-model-based program change. These principles will be illustrated with examples from a series of evaluations of safety programs in the railroad industry.
Implementation Analysis and Safety Impact Assessment
Joyce Ranney,  Volpe National Transportation Systems Center,  joyce.ranney@volpe.dot.gov
The railroad industry, like other industries, has a bias toward implementing programs and, when they do not deliver the desired results quickly, stopping them and trying something else. This flavor-of-the-month approach to safety improvement ensures minimal sustainable learning about safety risks and prevention. It also encourages skepticism and distrust on the part of labor as well as management. Given this bias and the resulting limitations to learning and improvement, implementation evaluation is one of the most important types of evaluation for the industry to learn about. The following will be presented: examples of data collection methods and feedback reports from the implementation evaluation efforts; forums that have been used to foster visibility and accountability on the part of the pilot sites; and ground rules that have helped to foster open and useful dialogue about implementation effectiveness.
Integrating Evaluation for Organizational Change Into the Federal Railroad Administration's (FRA's) Risk Reduction Program
Miriam Kloeppel,  Federal Railroad Administration,  miriam.kloeppel@dot.gov
Michael Coplen,  Federal Railroad Administration,  michael.coplen@dot.gov
The relationships among industry management, labor, and the Federal Railroad Administration have evolved over time into a culture that does not foster open disclosure of safety-related information. Unlike regulatory programs, the Risk Reduction Program will engender collaborative development of data collection, analysis, modeling, and identification of corrective actions. This will be a major change for the FRA and for participating railroads. Development, implementation, and ongoing operation of the Risk Reduction Program will require evaluation on several levels. For example, changes to FRA organization and culture must be evaluated. Also, individual programs created as part of the Risk Reduction Program will generate internal-use data as well as data to be shared with the FRA; both types must be subject to evaluation. And to promote an effective nationwide program, the FRA will need to perform cross-project evaluations.

