Session Title: Evaluation in the Era of Evidence-based Prevention
Multipaper Session 330 to be held in Royale Conference Foyer on Thursday, November 8, 9:35 AM to 11:05 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Nikki Bellamy, United States Department of Health and Human Services, nikki.bellamy@samsa.hhs.gov
Abstract: This session presents prepared papers that explore three important areas in which prevention evaluation is evolving. The prevention field has developed a proliferation of approaches for participants at different risk levels, policies addressing environmental contexts such as family and community, and strategies for different age groups. This proliferation of approaches and evaluation-based knowledge means that evaluation must become more useful for comparing disparate interventions and for monitoring and improving the implementation of evidence-based practice. The first paper elaborates the Institute of Medicine categories of universal, selective, and indicated prevention as a framework for interrelating evaluation findings to identify relative effectiveness and efficiency in meeting common objectives. The second paper uses national surveillance data to compare the severity of consequences associated with different substances and draws out the implications for evaluating different prevention strategies. The third paper elaborates performance evaluation methods designed to ensure strong implementation of evidence-based programs and practice.
The Institute of Medicine Framework as a Meta-construct for Organizing and Using Evaluation Studies
J Fred Springer, EMT Associates Inc, fred@emt.org
The Institute of Medicine (IOM) categorization of prevention into universal, selective, and indicated populations has been widely adopted in the prevention field, yet the terms are not precisely defined, systematically used to guide evaluation, or uniformly applied in practice. This paper articulates and applies the strong potential of the IOM categories to bring a unifying framework to currently fragmented prevention strategies and practices. The underlying implications of the IOM categories for identifying and recruiting participants, selecting effective interventions, anticipating attainable positive outcomes, and avoiding potential unintended influences are explicated. The ways in which the framework will help to organize and compare evaluation findings from disparate interventions are highlighted, and implications for evaluation design within each category are discussed. Systematically applied, the IOM framework can be a valuable tool for creating a conceptually unified and evidence-based continuum of prevention services.
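As a rough illustrative sketch only (the paper's own instruments are not reproduced in this abstract), the example below shows one way the universal/selective/indicated labels could be used to tag and compare findings from disparate evaluations that share a common outcome; the category names come from the IOM framework, while the data structure, field names, and example values are hypothetical.

```python
# Illustrative sketch: using the IOM categories as a tagging scheme for
# organizing evaluation findings. The universal/selective/indicated labels are
# from the IOM framework; all records and effect sizes below are hypothetical.
from collections import defaultdict
from statistics import mean

IOM_CATEGORIES = ("universal", "selective", "indicated")

# Hypothetical findings: (study, IOM category, shared outcome, effect size)
findings = [
    ("Study A", "universal", "30-day alcohol use", 0.10),
    ("Study B", "universal", "30-day alcohol use", 0.14),
    ("Study C", "selective", "30-day alcohol use", 0.22),
    ("Study D", "indicated", "30-day alcohol use", 0.35),
]

def summarize_by_category(findings, outcome):
    """Group effect sizes for a shared outcome by IOM prevention category."""
    grouped = defaultdict(list)
    for study, category, measured_outcome, effect in findings:
        if category in IOM_CATEGORIES and measured_outcome == outcome:
            grouped[category].append(effect)
    return {category: mean(effects) for category, effects in grouped.items()}

# Mean effect size per IOM category for the common objective.
print(summarize_by_category(findings, "30-day alcohol use"))
```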
A Measure of Severity of Consequences for Evaluating Prevention Policy
Steve Shamblen, Pacific Institute for Research and Evaluation, sshamblen@pire.org
The focus of substance abuse prevention policy is to prevent the harmful health, legal, social, and psychological consequences of abuse, yet there is an absence of systematic, comparative research examining the negative consequences experienced as a result of using specific substances. Further, techniques typically used for needs assessment (i.e., prevalence proportions) do not take into account the probability that use of a specific substance will lead to a negative consequence. An approximated severity index is proposed that estimates the probability of experiencing negative consequences as a result of using a specific substance and is comparable across substances. Data from national surveillance surveys (NSDUH, ADSS) are used to demonstrate these techniques. The findings suggest that the substances typically treated as priorities on the basis of prevalence proportions are not the same substances that have a high probability of causing negative consequences. The rich policy implications of these findings are discussed.
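Read as a conditional probability, an index of this kind can be sketched as the number of past-year users of a substance who report a negative consequence divided by the number of past-year users of that substance, computed per substance so that values are comparable. The example below is only a minimal sketch of that idea; the counts are hypothetical placeholders, not NSDUH or ADSS estimates, and the paper's actual estimator may differ.

```python
# Minimal sketch of a severity index: P(negative consequence | past-year use).
# Counts are hypothetical placeholders, NOT values from NSDUH or ADSS.

hypothetical_counts = {
    # substance: (past-year users, users reporting any negative consequence)
    "substance_A": (10_000, 800),  # high prevalence, lower conditional risk
    "substance_B": (1_500, 450),   # low prevalence, higher conditional risk
}

def severity_index(users: int, users_with_consequence: int) -> float:
    """Estimated probability of a negative consequence, given past-year use."""
    return users_with_consequence / users

for substance, (users, harmed) in hypothetical_counts.items():
    print(f"{substance}: users={users}, severity index={severity_index(users, harmed):.2f}")
```

Even in this toy setup the contrast the paper draws is visible: the more prevalent substance ranks lower on the severity index than the less prevalent one.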
Evaluation Techniques for Effectively Implementing and Adapting Evidence-based Programs and Practice
Elizabeth Harris, EMT Associates Inc, eharris@emt.org
Traditional evaluation designs emphasize the generation of knowledge concerning whether interventions work; they focus on measuring outcomes and attributing cause. In an era of evidence-based practice, the emphasis of program evaluation should shift to generating information on program implementation, fidelity to design intentions, and the need for adaptation. This paper contrasts the design of evaluation research for knowledge generation with a framework for performance evaluation for program improvement. The author presents the concepts, tools, and products she has used in specific studies. The major components of the approach are a logic model that articulates the elements of an evidence-based approach, the logical organization and analysis of quantitative measures at the core of the performance evaluation system, the use of qualitative information to interpret the quantitative data, products that inform planning and decisions for quality improvement, and ways of working effectively with program staff.
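As a hedged sketch of the general approach (the paper's specific tools and measures are not given in this abstract), a performance evaluation system of this kind can roll quantitative implementation measures, keyed to logic-model elements, up into a fidelity summary that program staff can act on; every measure name, target, and value below is hypothetical.

```python
# Illustrative fidelity-monitoring sketch for an evidence-based program.
# Measure names, targets, and observed values are hypothetical examples only.

# Quantitative implementation measures keyed to elements of a program logic model.
fidelity_measures = {
    "sessions_delivered":     {"observed": 10, "target": 12},  # dosage
    "core_topics_covered":    {"observed": 8,  "target": 8},   # adherence
    "avg_session_attendance": {"observed": 14, "target": 20},  # reach
}

def fidelity_report(measures: dict, threshold: float = 0.9) -> dict:
    """Express each measure as a proportion of its target and flag shortfalls."""
    report = {}
    for name, m in measures.items():
        ratio = m["observed"] / m["target"]
        report[name] = {"ratio": round(ratio, 2), "meets_target": ratio >= threshold}
    return report

for name, result in fidelity_report(fidelity_measures).items():
    print(name, result)

# Shortfalls flagged here would be interpreted alongside qualitative information
# (e.g., staff interviews) before deciding whether adaptation is warranted.
```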