Evaluation 2009



Session Title: Reviewing Three Statistical Techniques and Their Applications for Evaluation Research: Using Propensity Score Matching (PSM), Hierarchical Linear Models (HLM), and Missing Data Techniques
Multipaper Session 630 to be held in Sebastian Section I2 on Friday, Nov 13, 4:30 PM to 6:00 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Allan Porowski, ICF International, aporowski@icfi.com
Discussant(s):
John Hitchcock, Ohio University, jhitchcoc@ohio.edu
Abstract: Evaluation research is often only as good as its data. Accordingly, this multipaper session introduces three statistical techniques that provide alternatives when traditional experimental studies break down and that help evaluators tame bad or incomplete data. Propensity Score Matching (PSM) can help evaluators create equivalent control groups in situations where no natural control group exists. Hierarchical Linear Models (HLM) help evaluators overcome issues of dependence, allowing them to examine their subjects while controlling for group-level membership (e.g., schools, neighborhoods). Finally, missing data techniques can preserve records that would otherwise be lost, thereby also promoting statistical power. These statistical techniques are commonly used in evaluation research today, and the presenters will focus on the benefits and drawbacks of each technique through examples from evaluation research.
The Use of Several Propensity Score Matching Techniques in the Evaluation of Educational Interventions
Aikaterini Passa, ICF International, apassa@icfi.com
Jing Sun, ICF International, jsun@icfi.com
An evaluation of the Communities In Schools (CIS) program in Texas implemented two types of propensity score matching techniques for constructing comparison groups. These methods allowed us to conduct a highly rigorous study at the school and the student level, respectively. In this evaluation, we sought to quantify the impact of the Communities In Schools network in Texas on several academic and behavioral outcomes across elementary, middle, and high schools by (a) matching CIS schools with other schools on several school characteristics using optimal matching, and (b) matching students who were exposed to the CIS model with comparable students who were not, combining nearest-neighbor and exact matching on 12 student characteristics. This presentation will provide an overview of our methods and the implications for future research on school-based interventions for at-risk students.
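To illustrate the combined matching strategy described above, the sketch below pairs each treated unit with its nearest unmatched control on the propensity score, restricted to controls that agree exactly on a stratum variable (e.g., grade level). This is a minimal illustration, not the study's actual procedure: the data, the stratum variable, and the greedy without-replacement rule are all assumptions, and the propensity scores are taken as given from a prior logistic regression.

```python
# Sketch: greedy nearest-neighbor propensity score matching within
# exact-match strata. Illustrative only; data and variable names are
# invented, and propensity scores are assumed precomputed.

def match_within_strata(treated, controls):
    """Match each treated unit to the nearest unmatched control that
    shares its exact-match stratum.

    treated, controls: lists of (unit_id, stratum, propensity_score).
    Returns a list of (treated_id, control_id) pairs.
    """
    pairs = []
    available = list(controls)
    for t_id, t_stratum, t_ps in treated:
        # Exact matching: restrict to controls in the same stratum.
        candidates = [c for c in available if c[1] == t_stratum]
        if not candidates:
            continue  # no exact match available; unit stays unmatched
        # Nearest neighbor on the propensity score.
        best = min(candidates, key=lambda c: abs(c[2] - t_ps))
        pairs.append((t_id, best[0]))
        available.remove(best)  # match without replacement
    return pairs

treated = [("T1", "grade8", 0.62), ("T2", "grade8", 0.35)]
controls = [("C1", "grade8", 0.60), ("C2", "grade8", 0.30),
            ("C3", "grade7", 0.61)]
print(match_within_strata(treated, controls))
# [('T1', 'C1'), ('T2', 'C2')]
```

Matching without replacement, as here, keeps the comparison group the same size as the treated group; production implementations typically also enforce a caliper on the score distance.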
Examining School-Based Intervention Programs in a Multilevel Context: Using Hierarchical Linear Modeling (HLM) in Evaluation Research
Frances Burden, ICF International, fburden@icfi.com
Kazuaki Uekawa, ICF International, kuekawa@icfi.com
This presentation examines the outcomes of students enrolled in Communities In Schools (CIS) in Texas using hierarchical linear models (HLM). HLM is a set of statistical models that enables one to estimate student-level effects while controlling for the different contexts of schools. Drawing on the CIS of Texas evaluation, this presentation focuses on several HLM models, including matched student comparisons, CIS-only student models, and growth curve models. It compares and contrasts the benefits and weaknesses of these three types of models and their outcomes, and discusses the different insights each model provided for the CIS evaluation.
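The dependence problem that motivates HLM can be summarized by the intraclass correlation (ICC): when students within the same school resemble each other, observations are not independent, single-level regression understates standard errors, and a random-intercept model that separates school-level from student-level variance is warranted. The sketch below computes a one-way ANOVA estimate of the ICC; the data are invented for illustration and are not from the CIS evaluation.

```python
# Sketch: intraclass correlation (ICC) from scores grouped by school,
# via the one-way ANOVA estimator. Illustrative data only.

def icc(groups):
    """Estimate the ICC from scores grouped by cluster.

    groups: list of lists, one inner list of scores per school
    (assumed roughly balanced for this simple estimator).
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-school and within-school sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    n0 = n / k  # average school size
    var_between = max(0.0, (ms_between - ms_within) / n0)
    return var_between / (var_between + ms_within)

schools = [[70, 72, 74], [80, 82, 84], [90, 92, 94]]
print(round(icc(schools), 3))  # 0.961
```

Here nearly all variance lies between schools, so ignoring clustering would badly misstate precision; with an ICC near zero, a single-level model would be defensible.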
The Application of Missing Data Techniques in a School-Based Intervention Evaluation: Evaluating Multiple Imputation, Maximum Likelihood, and Listwise Deletion in an RCT Context
John Hitchcock, Ohio University, jhitchcoc@ohio.edu
Frances Burden, ICF International, fburden@icfi.com
Kelle Basta, ICF International, kbasta@icfi.com
In the course of completing a school-based intervention evaluation for the National Institute of Justice, it became necessary to address the substantial levels of missing data across a lengthy student survey. Although the response rate for students was high, approximately 40 percent of the data would have been lost through standard methods of deleting any student records that were not complete (i.e., listwise deletion). Deleting 40 percent of the respondents from the final analyses would have considerably reduced power and threatened internal and external validity; therefore, both multiple imputation and maximum likelihood missing data techniques were applied to the data to conserve the number of respondents. This presentation focuses on the merits and drawbacks of each of these missing data techniques and compares multiple imputation, maximum likelihood, and listwise deletion in the context of this school-based evaluation.
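The contrast described above, listwise deletion discarding whole records while imputation retains them, can be sketched in a few lines. This is a deliberately simplified illustration with an invented toy "survey": one stochastic fill of each missing value around the column mean, standing in for the core idea that multiple imputation repeats such draws and pools the results. It is not the principled chained-equations procedure a real analysis would use.

```python
# Sketch: listwise deletion vs. one stochastic imputation draw.
# Toy data; a real analysis would use principled multiple imputation.
import random

def listwise_delete(records):
    """Keep only records with no missing (None) values."""
    return [r for r in records if all(v is not None for v in r)]

def impute_once(records, rng):
    """One stochastic imputation draw: fill each missing value with the
    column mean plus noise scaled by the column's observed range.
    Repeating this with different draws and pooling estimates is the
    core idea of multiple imputation."""
    cols = list(zip(*records))
    filled = []
    for r in records:
        new = []
        for j, v in enumerate(r):
            if v is None:
                observed = [x for x in cols[j] if x is not None]
                mean = sum(observed) / len(observed)
                spread = max(observed) - min(observed)
                v = mean + rng.uniform(-0.5, 0.5) * spread
            new.append(v)
        filled.append(tuple(new))
    return filled

survey = [(3, 4), (2, None), (None, 5), (4, 4), (1, None)]
print(len(listwise_delete(survey)))   # 2 of 5 records survive deletion
print(len(impute_once(survey, random.Random(0))))  # all 5 retained
```

Even this toy example shows the power problem: deletion here discards 60 percent of the sample, mirroring the roughly 40 percent loss the presenters report avoiding.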

