
Session Title: Back to the Basics and Beyond
Multipaper Session 220 to be held in PRESIDIO A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Raymond Hart,  Georgia State University, rhart@gsu.edu
Consequences of Violating Fundamental Assumptions of the Central Limit Theorem in Evaluation and Research Practice
Presenter(s):
Raymond Hart, Georgia State University, rhart@gsu.edu
Abstract: Recent trends in national evaluations and research projects have used the central limit theorem as a basis for identifying classrooms, schools, and school districts as statistical outliers on various dependent variables. These studies often ignore or overlook a fundamental assumption in the application of the central limit theorem: that the assignment of students to classrooms, schools, and school districts must be random. In practice, the distribution of students across educational entities and programs is systematic, based on socioeconomic status, ability, or other variables. This paper uses simulated data to illustrate the increased likelihood of identifying classrooms, schools, or school districts as statistical outliers when students are systematically drawn from a restricted range of the normal distribution. The paper also provides practical examples of the social, political, and financial implications that arise when statistical conclusions are based on an inaccurate application of the central limit theorem.
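To make the argument concrete, the following is a minimal illustrative sketch in Python (not the authors' code; the population parameters, class size, and restricted-range rule are assumptions) showing how systematic assignment of students inflates the share of classrooms flagged as outliers under naive CLT-based limits.

# Minimal sketch (assumed parameters): simulate how non-random assignment of
# students to classrooms inflates the rate of "outlier" classrooms when
# CLT-based control limits assume random sampling from the full population.
import numpy as np

rng = np.random.default_rng(0)
pop_mean, pop_sd = 500.0, 100.0      # hypothetical test-score population
class_size, n_classes = 25, 10_000

# CLT-based limits: classroom means should fall within +/- 1.96 SE of pop_mean
se = pop_sd / np.sqrt(class_size)
lo, hi = pop_mean - 1.96 * se, pop_mean + 1.96 * se

def outlier_rate(draw):
    """Share of simulated classrooms flagged as outliers under the CLT limits."""
    means = draw().mean(axis=1)
    return np.mean((means < lo) | (means > hi))

# Random assignment: students drawn from the full population (CLT assumption holds)
random_draw = lambda: rng.normal(pop_mean, pop_sd, size=(n_classes, class_size))

# Systematic assignment: each classroom drawn only from the low end of the distribution
def restricted_draw():
    scores = rng.normal(pop_mean, pop_sd, size=(n_classes, class_size * 4))
    scores.sort(axis=1)
    return scores[:, :class_size]     # each classroom gets low-scoring students

print("random assignment:    ", outlier_rate(random_draw))      # close to 0.05
print("systematic assignment:", outlier_rate(restricted_draw))  # far above 0.05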
The Chi-Square Test: Often Used and More Often Misinterpreted
Presenter(s):
Todd Franke, University of California, Los Angeles, tfranke@ucla.edu
Christina Christie, University of California, Los Angeles, tina.christie@ucla.edu
Abstract: The “chi-square test,” or more appropriately the three or possibly four tests that get referred to as the “chi-square test,” represent one of the most common statistical procedures used by evaluators for examining categorical data. While the calculations are identical, the circumstances under which the chi-square test of independence is appropriate, compared with the chi-square test of homogeneity, are often misunderstood by evaluators, which leads to a diminished quality of evaluation reports. This proposal will examine the use of the family of chi-square-based tests across several evaluation journals (e.g., Evaluation Review, Journal of the American Evaluation Association, New Directions), identify examples of use and misuse, and present information to clarify the correct usage of each of these chi-square tests and the subsequent interpretation of the results. Finally, the presentation will discuss the appropriate use of post hoc comparison procedures for the chi-square test of homogeneity.
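For illustration, a minimal Python sketch (with made-up counts) shows that the same chi-square computation, here via scipy.stats.chi2_contingency, serves both the test of independence and the test of homogeneity; only the sampling design, and therefore the interpretation, differs.

# Minimal sketch (illustrative data): the identical chi-square computation underlies
# both the test of independence (one sample, two cross-classified variables) and the
# test of homogeneity (fixed samples from several populations, one variable).
import numpy as np
from scipy.stats import chi2_contingency

# Rows: groups or program sites; columns: outcome categories. Counts are made up.
table = np.array([[30, 50, 20],
                  [45, 35, 20]])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.4f}")

# Independence reading: one sample; test whether the two variables are associated.
# Homogeneity reading:  row totals fixed by design; test whether the column
#                       distribution is the same in every row (population).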
Data Reduction and Classification Decisions: Using a Factor Analytic Approach to Examine Exemplary Teaching Characteristics
Presenter(s):
Sheryl Hodge, Kansas State University, shodge@ksu.edu
Jan Middendorf, Kansas State University, jmiddend@ksu.edu
Linda Thurston, National Science Foundation, lthursto@nsf.gov
Cindi Dunn, Kansas State University, ckdunn@ksu.edu
Abstract: Using data gathered from a previously administered electronic data collection effort, evaluators at the Office of Educational Innovation and Evaluation (OEIE) sought to test whether substantive underlying themes were being masked within the larger Exemplary Teaching Characteristics instrument. Following Dillman’s tailored design method (Dillman, 2007), OEIE administered the Web-based Exemplary Teacher Characteristics Survey to 6,044 professional educators. Soon after, a previously identified expert teacher professional development cadre convened to disaggregate survey items into distinguishable professional development levels. Using three distinct exploratory factor analysis approaches, OEIE framed the decision-making parameters used to improve the overall quality of the measure. One of these strategies, corroborated by the experts, provided further statistical evidence of construct validity. The decision-making processes, as well as the interpretation and identification of the rotated pattern matrices, frame the lessons learned for evaluation practice.
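As a hedged illustration of the kind of analysis described, the following Python sketch (synthetic responses, not OEIE data; scikit-learn’s FactorAnalysis with varimax rotation, available in version 0.24 and later, is only one of several possible tools) extracts a small number of factors and prints a rotated loading matrix.

# Minimal sketch (synthetic responses): one way to run an exploratory factor
# analysis and inspect a rotated pattern (loading) matrix.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_respondents, n_items, n_factors = 500, 12, 3

# Fabricate item responses driven by a few latent "teaching characteristic" factors.
latent = rng.normal(size=(n_respondents, n_factors))
loadings_true = rng.normal(size=(n_factors, n_items))
responses = latent @ loadings_true + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(responses)

# Rotated pattern matrix: rows are factors, columns are survey items.
print(np.round(fa.components_, 2))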
Estimating Program Impact Using the Bloom Adjustment for Treatment No-Shows: Evaluation of a Literacy Intervention With Hierarchical Linear Modeling
Presenter(s):
Jing Zhu, Metis Associates, jzhu@metisassoc.com
Jonathan Tunik, Metis Associates, jtunik@metisassoc.com
Alan Simon, Metis Associates, asimon@metisassoc.com
Abstract: This study estimates program impact in a randomized controlled trial (RCT) in which a substantial proportion of treatment students with outcome data do not actually receive the intervention. In RCT studies, researchers typically analyze intention-to-treat (ITT) samples to preserve randomization. Because of treatment “no-shows,” however, ITT analyses tend to underestimate the treatment effect on those who do receive the intervention as intended. The Bloom adjustment is generally considered a useful approach for converting an ITT estimate into a treatment-on-the-treated (TOT) estimate, based on the key assumption that no-shows experience zero impact from the intervention. The present study applies the Bloom adjustment to adjust both the impact estimate and its standard error in an impact study of a literacy program using hierarchical linear modeling. The adjusted TOT estimate is compared to the ITT estimate in terms of magnitude and statistical significance. General issues in applying the Bloom adjustment are also discussed.
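The core of the Bloom adjustment can be shown with a short Python sketch using made-up numbers (the ITT estimate, standard error, and take-up rate below are assumptions, not results from the study): the ITT estimate and its standard error are both divided by the treatment-group take-up rate, so the t-statistic, and hence statistical significance, is unchanged.

# Minimal sketch (made-up numbers): the Bloom no-show adjustment converts an
# intention-to-treat (ITT) estimate into a treatment-on-the-treated (TOT)
# estimate by dividing by the treatment-group take-up rate, under the
# assumption that no-shows experience zero impact.
itt_estimate = 0.12     # hypothetical ITT effect (e.g., in SD units) from an HLM
itt_se = 0.05           # its standard error
takeup_rate = 0.70      # share of treatment students who actually received the program

tot_estimate = itt_estimate / takeup_rate
tot_se = itt_se / takeup_rate

print(f"TOT estimate: {tot_estimate:.3f}  (SE {tot_se:.3f})")
# Both the estimate and SE are scaled by 1/takeup_rate, so the t-statistic and
# p-value match those of the ITT analysis.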
