
Session Title: Innovative Approaches to Impact Assessments
Multipaper Session 391 to be held in Calhoun Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Keith Zvoch,  University of Nevada, Las Vegas,  zvochk@unlv.nevada.edu
Treatment Fidelity in Multi-site Evaluation: A Multi-level Longitudinal Examination of Provider Adherence Status and Change
Presenter(s):
Keith Zvoch,  University of Nevada, Las Vegas,  zvochk@unlv.nevada.edu
Lawrence Letourneau,  University of Nevada, Las Vegas,  letourn@unlv.nevada.edu
Abstract: Program implementation data, obtained from repeated observation of teachers delivering one of two early literacy programs to economically disadvantaged students in a large school district in the southwestern United States, were analyzed with multilevel modeling techniques to estimate the status of, and change in, provider adherence to program protocol across the intervention period. Results indicated that fidelity to program protocol varied within and between treatment sites and across adherence outcomes. An exploratory examination of selected provider and site characteristics indicated that the professional preparation of providers and the particular treatment intervention adopted were associated with fidelity outcomes. These results provide some insight into the range of factors associated with protocol adherence and highlight the challenge of achieving and maintaining fidelity to a treatment intervention delivered by multiple providers across multiple treatment sites. Implications for evaluation theory, design, and practice are discussed.
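The per-provider "status" and "change" estimates described in this abstract can be illustrated with a minimal sketch. The data, site/provider labels, and observation schedule below are hypothetical; the study itself used multilevel models, whereas this sketch only fits a separate least-squares growth line for each provider to show what status (level) and change (slope) mean:

```python
# Hypothetical adherence data: fraction of protocol steps observed at each
# of four equally spaced observation occasions, for providers nested in sites.
obs = {
    ("site_A", "p1"): [0.60, 0.65, 0.72, 0.75],
    ("site_A", "p2"): [0.80, 0.78, 0.81, 0.83],
    ("site_B", "p3"): [0.55, 0.50, 0.52, 0.48],
}

def fit_line(y):
    """Ordinary least-squares intercept (adherence status at occasion 0)
    and slope (change per occasion) for occasions 0..n-1."""
    n = len(y)
    xs = range(n)
    x_bar = sum(xs) / n
    y_bar = sum(y) / n
    slope = (sum((x - x_bar) * (yi - y_bar) for x, yi in zip(xs, y))
             / sum((x - x_bar) ** 2 for x in xs))
    return y_bar - slope * x_bar, slope

for (site, provider), series in obs.items():
    status, change = fit_line(series)
    print(f"{site}/{provider}: status={status:.3f}, change={change:+.3f}")
```

In a genuine multilevel analysis these provider-level intercepts and slopes would be modeled as random effects varying across providers and sites rather than fit independently.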
Multiple Random Sampling When Treatment Units are Matched to Numerous Controls Using Propensity Scores
Presenter(s):
Shu Liang,  Oregon Department of Corrections,  shu.liang@doc.state.or.us
Paul Bellatty,  Oregon Department of Corrections,  paul.t.bellatty@doc.state.or.us
Abstract: The Oregon Department of Corrections offers various programs to minimize the likelihood of future criminal activity. These programs can be expensive and resources are limited. Recognizing the most effective programs and eliminating the ineffective ones is essential for more efficient use of limited resources. Randomly assigning inmates to treatment or control groups provides an operative means of quantifying program effectiveness, but pragmatic and ethical considerations often prohibit its application. Non-random designs that do not account for pre-existing group differences can inappropriately attribute those differences to treatment effectiveness. Propensity score matching is a useful method for establishing group comparability in non-random designs. A highly refined matching process can eliminate too many individuals from the treatment group, while a less refined process may retain all treatment individuals but create too many matches for some of them. Multiple random sampling of one-to-one matches enables researchers to retain more treatment individuals while providing less biased estimates of program effectiveness.
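The multiple-random-sampling idea can be sketched as follows. All data below are hypothetical (the unit labels, outcomes, and match lists are invented for illustration): each treatment unit has a pool of propensity-score-matched controls, one control is drawn at random per treatment unit to form a one-to-one matched sample, and the effect estimate is averaged over many such draws so every treatment unit is retained:

```python
import random

# Hypothetical data: each treatment unit maps to the outcomes of the
# control units whose propensity scores fell within its matching caliper.
matches = {
    "t1": [0.30, 0.35, 0.40],
    "t2": [0.55],
    "t3": [0.20, 0.25],
}
treated_outcomes = {"t1": 0.50, "t2": 0.45, "t3": 0.60}

def one_to_one_effect(rng: random.Random) -> float:
    """Draw one matched control at random per treatment unit and estimate
    the effect as the mean treated-minus-control outcome difference."""
    diffs = [treated_outcomes[t] - rng.choice(ctrls)
             for t, ctrls in matches.items()]
    return sum(diffs) / len(diffs)

def multiple_random_sampling(n_samples: int, seed: int = 0) -> float:
    """Average the one-to-one effect estimate over many random draws,
    retaining every treatment unit in each draw."""
    rng = random.Random(seed)
    return sum(one_to_one_effect(rng) for _ in range(n_samples)) / n_samples

print(round(multiple_random_sampling(1000), 3))
```

Averaging over many random one-to-one samples uses the full matched-control pool without discarding treatment units, which is the trade-off the abstract describes.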
A Quantitative Evaluation Utilizing the 'Ground Effect' Unit: Application to the Evaluation of Foreign Student Policy and Regional Cooperation Program
Presenter(s):
Yuriko Sato,  Tokyo Institute of Technology,  yusato@ryu.titech.ac.jp
Abstract: The 'ground effect' is the (weighted) average change in key indicators of the intervened group relative to a non-intervened control group. It is measured by dividing the difference between the mean of the key indicators of the intervened group (M) and that of the control group (M') by M', expressed as:

(M / M') - 1 = (M - M') / M'

The 'ground effect' is treated as a unit and given the unit name 'effect'. Impact is calculated by multiplying the 'ground effect' by the intervened population, on the assumption that the impact is the sum total of the change brought about by the intervention in the target population. Efficiency is calculated by dividing this impact by the total input expressed in a monetary unit. Two cases applying this method will be introduced: the evaluation of Japan's Foreign Student Policy and that of a Regional Reproductive Health Program in the Philippines.
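The three quantities defined in the abstract can be sketched directly. The figures in the example are hypothetical, not drawn from the Foreign Student Policy or Philippine program evaluations:

```python
def ground_effect(m_treated: float, m_control: float) -> float:
    """Weighted-average change of the intervened group relative to the
    control group: (M / M') - 1 = (M - M') / M'."""
    return (m_treated - m_control) / m_control

def impact(effect: float, population: int) -> float:
    """Impact = ground effect x intervened population (the sum total of
    change brought about in the target population)."""
    return effect * population

def efficiency(total_impact: float, total_input: float) -> float:
    """Efficiency = impact per unit of monetary input."""
    return total_impact / total_input

# Hypothetical figures: key-indicator means of 1.2 (intervened) vs 1.0
# (control), 5,000 people reached, total input of 2,000,000 monetary units.
e = ground_effect(1.2, 1.0)
i = impact(e, 5000)
print(e, i, efficiency(i, 2_000_000))
```

Here the ground effect of roughly 0.2 'effect' scales up to the population-level impact, and the efficiency figure expresses that impact per monetary unit of input.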
Alternative Choices When a Comparison/Control Group is Desired but not Planned
Presenter(s):
Deborah Carran,  Johns Hopkins University,  dtcarran@jhu.edu
Stacey Dammann,  York College of Pennsylvania,  sdammann@ycp.edu
Abstract: Evaluations of programs/projects are often planned without the benefit of a comparison or control group, resulting in reports that lack internal validity. Alternative methodologies that should be considered include (a) comparing participant characteristics within the sample and (b) comparing results against published findings from similar programs/projects. This presentation will provide examples of both. Reporting on within-sample outcome differences requires establishing relevant characteristics that warrant comparison, such as demographic factors (e.g., experience) or program/project participation level. Two examples of within-sample comparisons that have appeared in published results will be presented. The second technique identifies published studies from similar programs/projects for comparison purposes; here it is critical to establish similarity between the comparison studies and the target program/project. Two examples will be presented for this demonstration as well. The use of nationally based studies with weighted results has proven informative for comparison purposes.