
Session Title: Learning to Promote Quality Over Ideology for Methodology
Panel Session 301 to be held in International Ballroom A on Thursday, November 8, 9:35 AM to 11:05 AM
Sponsored by the Presidential Strand and the Quantitative Methods: Theory and Design TIG
Chair(s):
George Julnes,  Utah State University,  gjulnes@cc.usu.edu
Discussant(s):
Lois-ellin Datta,  Datta Analysis,  datta@ilhawaii.net
Abstract: Much of the controversy over methodology within AEA over the past several years has been driven by ideology. Even efforts to examine the implications of using different methods become entangled in ideological debates. This panel offers perspectives on how we might learn from previous debates and move forward in promoting a commitment to quality in methodology that at least tempers, if not transcends, ideological conflicts. The goal is to strengthen the contribution of evaluation to improving society.
Missing in Action (MIA) in the Qualitative Versus Quantitative Wars
Henry M Levin,  Columbia University,  hl361@columbia.edu
Douglas Ready,  Columbia University,  ready@exchange.tc.columbia.edu
The war between advocates of qualitative versus quantitative methods has produced an excess of vehemence and ideology. This presentation will attempt to deconstruct some of that rhetoric by demonstrating that the combat routinely claims a major victim: quality in both types of studies. The presentation will show the nature and source of the collateral damage to research quality inflicted by the bellicose advocacy that characterizes the qualitative-quantitative conflict.
Establishing Criteria for Rigor in Non-Randomized and Qualitative Outcome Designs
Debra Rog,  Westat,  debrarog@westat.org
Evaluators conducting quantitative outcome studies, especially those using randomized designs, have established criteria for assessing the extent to which a study has maintained rigor and has adequate internal validity. Departures from the randomized design have less agreed-upon criteria for determining whether the studies are sufficiently rigorous to support statements of causality. As Boruch has demonstrated, quasi-experimental designs often fail to replicate the findings of randomized studies because of their vulnerability to threats to validity. What strategies, then, do we have for determining when a quasi-experiment produces results that approach the validity of a randomized study? In other words, what methodological improvements are sufficient to bolster a study's validity, and how can we assess or demonstrate that? Similarly, what criteria and standards exist for qualitative studies to ascertain their accuracy and precision? This paper will review the strategies that exist for assessing the adequacy of non-randomized outcome designs; discuss work underway to improve our ability to judge the quality and rigor of different designs, as well as strategies for accumulating evidence across non-randomized designs such as single-subject designs; and outline steps toward establishing bases for judging the validity and rigor of non-randomized designs.
The Renaissance of Quasi-Experimentation
William Shadish,  University of California, Merced,  wshadish@ucmerced.edu
Quasi-experimentation has long been the stepchild of the experimental literature. Even Donald Campbell, who coined the term quasi-experimentation, said he preferred randomized experiments when they were feasible and ethical. During the last 10 years in particular, the randomized experiment has come to dominate applications of experimental methodology to find out what works. More recently, however, quasi-experimentation has experienced something of a renaissance, driven by two primary developments. The first is the increasing use of the regression discontinuity design (RDD). That design has long been known to provide unbiased estimates of effects under some conditions, but it languished mostly out of sight and out of mind since its invention 40 years ago. More recently, RDD has become popular among economists, who have revitalized both its use and its analysis to provide better estimates of the effects of interventions. The second development is the use of propensity scores in nonrandomized experiments to provide better estimates of effects. For both developments, empirical studies suggest that they can provide estimates of effects as good as those from randomized experiments. While we still have much to learn about the conditions under which this optimistic conclusion holds, it seems likely that quasi-experimental methodology and analysis will play a much stronger role in providing evidence about what works than it has in the last several decades.
Working Towards a Balance of Values in Promoting Methods in Evaluation
George Julnes,  Utah State University,  gjulnes@cc.usu.edu
Promoting quality in methods is important in evaluation, but only because we believe that the use of quality methods will lead to better social outcomes. Such a benevolent linkage, however, demands much of the effective functioning of our evaluation community. In addition to the necessary theoretical and methodological developments, we also need the pragmatic skills to advance our craft. This paper addresses these issues and offers suggestions for promoting methodology in support of social betterment.