Evaluation 2011


Session Title: Valuing the Importance of Design in Producing Valuable Outcomes
Multipaper Session 845 to be held in Santa Monica on Saturday, Nov 5, 8:00 AM to 9:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Guili Zhang, East Carolina University, zhangg@ecu.edu
Increasing the Rigor of Evaluation Findings by Using Quasi-Experimental Research Designs
Presenter(s):
Elena Kirtcheva, Research for Better Schools, kirtcheva@rbs.org
Abstract: This paper presents the use of quasi-experimental research designs to rigorously evaluate the impact of a two-year professional development program on the content-knowledge gains of elementary and middle school science teachers. It describes a powerful method that controls for spurious relationships efficiently, without increasing the cost of the evaluation, and offers practical advice for budget-conscious evaluators seeking to design rigorous studies. The paper presents final findings from a professional development intervention to illustrate the use of the Non-Equivalent Dependent Variable (NEDV) and Non-Equivalent Groups (NEG) research designs in evaluation. It extends an earlier study of the same project by incorporating data and results from the second year of program implementation. It also includes a new section with practical guidance for evaluators considering the adoption of this methodology.
Evaluation in the Context of Lifecycles: 'A Place for Everything, Everything in Its Place'
Presenter(s):
Jennifer Urban, Montclair State University, urbanj@mail.montclair.edu
Monica Hargraves, Cornell University, mjh51@cornell.edu
Claire Hebbard, Cornell University, cer17@cornell.edu
Marissa Burgermaster, Montclair State University, burgermaster@gmail.com
William Trochim, Cornell University, wmt1@cornell.edu
Abstract: One of the most vexing methodological debates of our time (and one of the most discouraging messages for practicing evaluators) is the idea that there is a 'gold standard' evaluation design (the randomized experiment) that is generally preferable to all others. This paper discusses the history of the phased clinical trial in medicine as an example of an evolutionary lifecycle model that situates the randomized experiment within a sequence of appropriately rigorous methodologies. In addition, we propose that programs can be situated within an evolutionary lifecycle according to their phase of development. Ideally, when conducting an evaluation, the lifecycle phase of the program should be aligned with that of the evaluation. This paper describes our conceptualization of program and evaluation lifecycles and their alignment, including practical approaches to determining lifecycle phases, the implications of non-alignment, and how an understanding of lifecycles can aid in evaluation planning.
Is the Better the Enemy of the Good? A Comparison of Fixed- and Random-Effects Modeling in Effectiveness Research
Presenter(s):
James Derzon, Battelle Memorial Institute, derzonj@battelle.org
Ping Yu, Battelle Memorial Institute, yup@battelle.org
Bruce Ellis, Battelle Memorial Institute, ellis@battelle.org
Aaron Alford, Battelle, alforda@battelle.org
Carmen Arroyo, Substance Abuse and Mental Health Services Administration, carmen.arroyo@samhsa.hhs.gov
Sharon Xiong, Battelle, xiongx@battelle.org
Abstract: One of the key challenges in conducting large-scale, multi-site, multilevel evaluations is the inability to conduct randomized controlled trials (or the lack of a comparison group) to assess the effectiveness of social interventions and to attribute outcomes to the interventions of interest. The innovative use of advanced statistical designs guided by program theory models has been used as an approach to overcoming these challenges and has been recognized as a viable way to evaluate the effectiveness of large-scale social interventions. Using data from the National Evaluation of the Safe Schools/Healthy Students (SS/HS) Initiative, we examine the knowledge-generation consequences of adopting the 'better' random-effects modeling approach over the 'good' fixed-effects method for controlling for the moderators and identifying the mediators of SS/HS intervention effectiveness.
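The fixed- versus random-effects contrast this abstract turns on can be illustrated with a short sketch. Below is a minimal, hypothetical example in Python using statsmodels and simulated multi-site data; the variable names, effect size, and model specifications are assumptions made for illustration and are not taken from the SS/HS evaluation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated multi-site outcome data (entirely hypothetical; not SS/HS data).
rng = np.random.default_rng(0)
n_sites, n_per_site = 20, 50
site = np.repeat(np.arange(n_sites), n_per_site)
site_effect = rng.normal(0.0, 0.5, n_sites)[site]     # between-site variation
treated = rng.integers(0, 2, n_sites * n_per_site)    # individual-level assignment
y = 0.3 * treated + site_effect + rng.normal(0.0, 1.0, n_sites * n_per_site)
df = pd.DataFrame({"y": y, "treated": treated, "site": site})

# 'Good' fixed-effects approach: site dummies absorb all between-site differences.
fe = smf.ols("y ~ treated + C(site)", data=df).fit()

# 'Better' random-effects approach: sites are treated as draws from a population,
# so between-site variance is modeled rather than absorbed by dummy variables.
re = smf.mixedlm("y ~ treated", data=df, groups=df["site"]).fit()

print("fixed-effects estimate: ", fe.params["treated"])
print("random-effects estimate:", re.params["treated"])
```

In the fixed-effects specification, the site dummies consume degrees of freedom and preclude site-level predictors, whereas the mixed model estimates a between-site variance component; this is one reason random-effects modeling is often preferred for multi-site evaluations of the kind the abstract describes.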
