Evaluation 2008



Session Title: Challenging the Basics: The Relationship Between Evaluation Methods and Findings in Substance Abuse and Mental Health Studies
Multipaper Session 208 to be held in Capitol Ballroom Section 7 on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Alcohol, Drug Abuse, and Mental Health TIG
Chair(s):
Robert Hanson,  Health Canada,  robert_hanson@hc-sc.gc.ca
Resiliency: A Qualitative Meta-Synthesis
Presenter(s):
Scott Nebel,  University of Denver,  scott.nebel@du.edu
Abstract: Resiliency has become an increasingly prominent concept in the field of mental health and in studies of children confronted with some form of distress. Nonetheless, little consensus exists among researchers, clinicians, and evaluators regarding what the term resiliency truly encompasses. Using the qualitative methodology of meta-synthesis, this study analyzes, deconstructs, and synthesizes the predominant qualitative studies on resiliency in an attempt to provide a more explicit understanding of the concept and to delineate it from other recovery-oriented terms and phenomena.
The Prevalence of Pseudoscientific Research Practices in the Evaluation of "Evidence-Based" Drug and Alcohol Prevention Programs
Presenter(s):
Dennis Gorman,  Texas A&M University,  gorman@srph.tamhsc.edu
Eugenia Conde,  Texas A&M University,  eugeniacd@tamu.edu
Brian Colwell,  Texas A&M University,  colwell@srph.tamhsc.edu
Abstract: Research has shown that evaluations of some of the most widely advocated school-based alcohol and drug prevention programs employ questionable practices in their data analysis and presentation. These practices include selective reporting among numerous outcome variables, changes in measurement scales before analysis, multiple subgroup analysis, post hoc sample refinement, and selective use of alpha levels above 0.05 and one-tailed significance tests. However, it is unclear just how widespread the use of such practices is within the overall field of alcohol and drug prevention since the focus of the existing critiques has been on a fairly narrow range of school-based programs. This presentation addresses this issue by reviewing the data analysis and presentation practices used in the evaluations of the 34 school-based programs that appear on the Substance Abuse and Mental Health Services Administration’s (SAMHSA) National Registry of Effective and Promising Programs.
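To make the inflation risk concrete, the following minimal Python sketch (hypothetical, not drawn from any of the reviewed evaluations) simulates a program with no true effect, measures ten outcomes, and applies the relaxed one-tailed alpha the abstract describes; the sample sizes and counts are illustrative assumptions:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    N_TRIALS = 2000    # simulated evaluations of a program with NO true effect
    N_OUTCOMES = 10    # outcome variables measured in each evaluation
    N_PER_ARM = 100    # participants per study arm
    ALPHA = 0.10       # relaxed one-tailed alpha, one of the practices at issue

    false_positive_evals = 0
    for _ in range(N_TRIALS):
        for _ in range(N_OUTCOMES):
            treated = rng.normal(0.0, 1.0, N_PER_ARM)   # null: same distribution
            control = rng.normal(0.0, 1.0, N_PER_ARM)
            t, p_two = stats.ttest_ind(treated, control)
            p_one = p_two / 2 if t > 0 else 1 - p_two / 2  # one-tailed p-value
            if p_one < ALPHA:            # report only the favorable outcome
                false_positive_evals += 1
                break

    print(f"share of null programs with a 'significant' outcome: "
          f"{false_positive_evals / N_TRIALS:.0%}")
    # Expected value is 1 - 0.9**10, roughly 65%: selective reporting across
    # ten outcomes at one-tailed alpha 0.10 makes most null programs look effective.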
Validating Self-Reports of Illegal Drug Use to Evaluate National Drug Control Policy: Taking the Data for a Spin
Presenter(s):
Stephen Magura,  Western Michigan University,  stephen.magura@wmich.edu
Abstract: Illicit drug use remains at high levels in the U.S. The federal Office of National Drug Control Policy evaluates the outcomes of national drug demand reduction policies by assessing changes in the levels of drug use, including measures of change from several federally-sponsored annual national surveys. The survey methods, which rely exclusively on self-reported drug use (by interview or paper-and-pencil questionnaire), have been criticized by the U.S. Government Accountability Office (GAO) as well as by independent experts. This analysis critiques a major validity study of self-reported drug use conducted by the federal government, showing that the favorable summary offered for public consumption is highly misleading. Specifically, the findings of the validity study, which compared self-reports with urine tests, are consistent with prior research showing that self-reports substantially underestimate drug use and can dramatically affect indicators of change. Thus, these national surveys are largely inadequate for evaluating national drug demand reduction policies and programs.
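A simple arithmetic sketch shows how underreporting can distort change indicators; the prevalence and reporting-rate figures below are illustrative assumptions, not values from the validity study:

    # Hypothetical illustration: true prevalence is FLAT across two survey years,
    # but the fraction of users willing to self-report drifts slightly.
    true_prev = {"year1": 0.10, "year2": 0.10}      # no real change in drug use
    report_rate = {"year1": 0.60, "year2": 0.50}    # assumed self-report rates

    observed = {yr: true_prev[yr] * report_rate[yr] for yr in true_prev}
    change = (observed["year2"] - observed["year1"]) / observed["year1"]

    print(observed)                          # {'year1': 0.06, 'year2': 0.05}
    print(f"apparent change: {change:.0%}")  # apparent change: -17%
    # A modest shift in willingness to self-report manufactures a 17% "decline"
    # in use where none exists -- the core of the critique of self-report-only surveys.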
The Making of “Effective” Drug Prevention Programs through Pseudoscientific Research Practices: A Reanalysis of Data from the Drug Abuse Resistance Education (DARE) Program
Presenter(s):
J Charles Huber Jr,  Texas A&M University,  jchuber@srph.tamhsc.edu
Dennis Gorman,  Texas A&M University,  gorman@srph.tamhsc.edu
Abstract: Evaluations of some of the most widely advocated school-based alcohol, tobacco, and other drug (ATOD) prevention programs have been shown to employ questionable practices in their data analysis and presentation, such as selective reporting among numerous outcome variables, post hoc sample refinement, changes in measurement scales before analysis, multiple subgroup analysis, and selective use of alpha levels above 0.05 and one-tailed significance tests. This raises the question of whether such irregular data management and analysis practices produce significant overestimates of program effects. We address this issue by reanalyzing data from an independent evaluation of the Drug Abuse Resistance Education (DARE) program, conducted by Richard Clayton and colleagues, that produced null results. Specifically, we will determine whether applying questionable data analysis and presentation practices to this dataset can produce positive results.
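As a hedged illustration of one such practice, the Python sketch below uses simulated data (not the Clayton et al. dataset) to show how post hoc subgroup searching combined with one-tailed testing can surface a "positive" finding in data with no true program effect; all names and splits are hypothetical:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    N = 1200  # simulated students; the outcome is pure noise (no program effect)

    program = rng.integers(0, 2, N)     # 0 = control, 1 = program
    sex = rng.integers(0, 2, N)         # arbitrary demographic splits
    grade = rng.integers(6, 9, N)       # grades 6-8
    outcome = rng.normal(0.0, 1.0, N)   # e.g., a 30-day use index, unrelated to all

    # Post hoc subgroup fishing: test the program "effect" in every sex-by-grade
    # cell; halving the two-tailed p amounts to picking the test direction after
    # seeing the data, another of the practices at issue.
    results = []
    for s in (0, 1):
        for g in (6, 7, 8):
            cell = (sex == s) & (grade == g)
            p_two = stats.ttest_ind(outcome[cell & (program == 1)],
                                    outcome[cell & (program == 0)]).pvalue
            results.append((p_two / 2, f"sex={s}, grade={g}"))

    best_p, best_cell = min(results)
    print(f"best one-tailed subgroup p-value: {best_p:.3f} ({best_cell})")
    # With six subgroups and post hoc direction choice, p < 0.10 turns up in a
    # large fraction of null datasets, so "effects" can be manufactured at will.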
