Evaluation 2011


Session Title: Quantifying Threats to Validity
Panel Session 595 to be held in Santa Monica on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Patrick McKnight, George Mason University, pem725@gmail.com
Abstract: Evaluators often face threats to validity, whether from relatively weak non-randomized designs or from strong designs that deteriorate through failed randomization, unexpected environmental changes, or other problems. Unfortunately, we do not know how large an effect these threats impose upon our findings. The purpose of this panel is to discuss several studies aimed at estimating the largest possible effect for several threats to validity, including selection bias, testing effects, and statistical artifacts. By estimating the largest effects these threats can produce, we can prioritize our efforts to protect against the largest and most likely threats to validity.
Appreciating Threats to Validity: Using Experimental Designs and Simulation to Estimate Effect Sizes
Patrick McKnight, George Mason University, pem725@gmail.com
Several prominent evaluators have argued for the importance of estimating the effects of various threats. Despite these efforts, very little work has been done to help us appreciate the magnitude and probability of threats to validity. The first presentation in this panel outlines the problems posed by threats to validity and how not knowing the size of these effects leads us all to routine practices that may not be justifiable.
Estimating Testing Effects Via A Web-based Intervention
Simone Erchov, George Mason University, sfranz1@gmu.edu
Repeated testing is often a concern because respondents tend to adopt response tendencies that do not adequately reflect the property we wish to measure. Unfortunately, we know little about how large an effect repeated testing can produce. A simple weight loss program, in which participants repeatedly reported their weight along with several self-report items, helped us estimate the effects of repeated testing. This presentation offers insight into how repeated testing produces effects similar to those of interventions. Simply put, when people are measured repeatedly over time, they change, and that change was attributable solely to the measurement, since there was no intervention. Repeated measurement thus served as an intervention. The implications of these findings will be discussed in detail.
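As a rough illustration of the mechanism described above (not code from the presentation itself), a testing effect can be sketched as a simulation in which no intervention is applied, yet each act of measurement nudges the reported value. All parameters here, such as the per-wave reactivity shift, the number of waves, and the baseline distribution, are hypothetical placeholders rather than values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulation: no intervention is applied, but each measurement
# occasion shifts the reported value slightly (reactivity), so the trajectory
# mimics a treatment effect. Parameters are illustrative, not from the study.
n_participants = 200
n_waves = 8                                        # repeated measurements
true_weight = rng.normal(90, 12, n_participants)   # baseline weight (kg)
reactivity = -0.25                                 # assumed per-wave shift from measurement alone
noise_sd = 0.8                                     # measurement noise (kg)

reports = np.empty((n_participants, n_waves))
for wave in range(n_waves):
    drift = reactivity * wave  # cumulative testing effect
    reports[:, wave] = true_weight + drift + rng.normal(0, noise_sd, n_participants)

# "Effect size" of measurement alone: standardized change, first to last wave
change = reports[:, -1] - reports[:, 0]
d = change.mean() / change.std(ddof=1)
print(f"Mean change: {change.mean():.2f} kg, Cohen's d: {d:.2f}")
```

Under these assumed values the simulation yields a nontrivial standardized change despite the absence of any intervention, which is the pattern the abstract attributes to repeated measurement.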
Maximizing Selection Bias Effects: An Experimental and Simulation Study
Julius Najab, George Mason University, jnajab@gmu.edu
Selection bias stands as one of the most likely and troublesome threats to validity. Failed randomization, non-randomized studies, or small sample sizes often lead to non-equivalent groups, a situation Campbell and Stanley originally referred to as selection bias. To estimate the maximal effect possible from selection bias, we conducted several experiments in which participants were purposely assigned to different groups based on certain pre-treatment variables. The more relevant the selection variable, the larger the selection bias effect, just as we might expect. What was not expected was the magnitude of the effect from this threat. Overall, selection bias effects can be far larger than we typically expect and could plausibly account for all of the effects observed in many evaluations. The implications of these findings will be discussed in detail.
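As a hedged illustration of the claim that more relevant selection variables produce larger spurious effects, the following sketch simulates purposive assignment on a pre-treatment covariate and reports the resulting group difference when no treatment is applied. The correlation values and the median-split assignment rule are assumptions for demonstration, not the panel's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

def selection_bias_effect(relevance, n=1000):
    """Spurious group difference (Cohen's d) when assignment is based on a
    pre-treatment variable whose correlation with the outcome is `relevance`.
    No treatment is applied; any 'effect' is pure selection bias."""
    covariate = rng.normal(size=n)
    # Outcome shares variance with the covariate; the rest is noise.
    outcome = relevance * covariate + np.sqrt(1 - relevance**2) * rng.normal(size=n)
    # Purposive (non-random) assignment: median split on the covariate.
    group = covariate > np.median(covariate)
    diff = outcome[group].mean() - outcome[~group].mean()
    pooled_sd = np.sqrt((outcome[group].var(ddof=1) + outcome[~group].var(ddof=1)) / 2)
    return diff / pooled_sd

for r in (0.0, 0.3, 0.6, 0.9):
    print(f"selection-variable relevance r={r:.1f}: spurious d = {selection_bias_effect(r):.2f}")
```

The spurious effect grows with the relevance of the selection variable and, at high relevance, can rival or exceed the effect sizes typically reported in evaluations, consistent with the abstract's point.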

