The Effectiveness of Mandatory-Random Student Drug Testing

Presenter(s):

Susanne James-Burdumy, Mathematica Policy Research, sjames-burdumy@mathematica-mpr.com

Brian Goesling, Mathematica Policy Research, bgoesling@mathematica-mpr.com

John Deke, Mathematica Policy Research, jdeke@mathematica-mpr.com

Eric Einspruch, RMC Research, eeinspruch@rmccorp.com

Abstract:
The Mandatory-Random Student Drug Testing (MRSDT) Impact Evaluation tested the effectiveness of MRSDT in 7 school districts and 36 high schools in the United States. The study is based on a rigorous experimental design in which schools were randomly assigned either to a treatment group that implemented MRSDT or to a control group that delayed implementation. To assess the effects of MRSDT on students, we administered student surveys at baseline and follow-up, collected school records data, interviewed school and district staff, and collected drug test results. Over 4,000 students were included in the study. The presentation will focus on the study's findings after the MRSDT programs had been implemented for one school year.


Striving for Balance: The Value of Publishing Rigorous Studies with Insignificant Findings

Presenter(s):

Jill Feldman, Research for Better Schools, feldman@rbs.org

Debra Coffey, Research for Better Schools, coffey@rbs.org

Ning Rui, Research for Better Schools, rui@rbs.org

Allen Schenck, RMC Corporation, schencka@rmcarl.com

Abstract:
Impact analyses from a four-year experimental study of READ 180, an intervention targeting struggling adolescent readers, showed no differences in reading performance between students assigned to the treatment and control groups. Results were consistent when data were analyzed by grade level, regardless of whether students had one or two years of treatment in any study year. In a related study, researchers explored whether students in different ability subgroups benefited from participation in READ 180. Consistent with the lack of significant findings from the RCT, these results failed to reveal subgroups of students for whom READ 180 worked better than the district's regular instruction. The presentation will conclude with a discussion of the value to research consumers of publishing rigorously designed studies when findings suggest a program does not work better than current practices, especially when the stakes for students and for society are high.


How to Train Your Dragon: One Story of Using a Quasi-Experimental Design Element in a School-Based Evaluation Study

Presenter(s):

Tamara M Walser, University of North Carolina, Wilmington, walsert@uncw.edu

Michele A Parker, University of North Carolina, Wilmington, parkerma@uncw.edu

Abstract:
Given current requirements under No Child Left Behind (NCLB) for implementing educational programs supported by scientifically based research, and the U.S. Department of Education's related priority for experimental and quasi-experimental studies in federal grant competitions, it is important that education evaluators (a) identify evaluation designs that meet the needs of the current NCLB climate, (b) address evaluation questions of causal inference, and (c) implement designs that are feasible and ethical for school-based evaluation. The purpose of this presentation is to describe the use of a quasi-experimental design element as part of a larger longitudinal evaluation study. Specifically, we focus on implementation of the design element, including issues encountered; the success of the design in terms of internal and external validity; and lessons learned and their implications for school-based evaluation practice, as well as for evaluation practice in general.