Evaluation 2008


Session Title: Randomized Control Trials: Regression Discontinuity and a Poor Relative
Multipaper Session 811 to be held in Mineral Hall Section B on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Frederick Newman,  Florida International University,  newmanf@fiu.edu
Are the Group Randomized Trials funded by the Institute of Education Sciences Designed with Adequate Power?
Presenter(s):
Jessaca Spybrook,  Western Michigan University,  jessaca.spybrook@wmich.edu
Abstract: In federally sponsored education research, randomized trials, particularly those that randomize entire classrooms or schools, have been deemed the most effective method for establishing strong evidence of the effectiveness of an intervention. However, a group randomized trial does not necessarily produce reliable evidence of the effectiveness of a program. A key element that contributes to the capacity of a group randomized trial to yield high-quality evidence is adequate statistical power. This study examined the experimental designs and power of group randomized trials funded between 2002 and 2006 by the National Center for Education Research, a division of the Institute of Education Sciences. The findings revealed that blocked designs are the most common type of design. In addition, the precision of the studies increased over the five-year span, indicating that the quality of the designs is improving.
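As background, the power of a two-arm group randomized trial depends on the number of clusters, the cluster size, the standardized effect size, and the intraclass correlation (ICC). The sketch below is not drawn from the paper and uses purely illustrative parameter values; it implements the standard calculation under the usual two-level model.

```python
# A minimal sketch, not taken from the paper, of the standard power
# calculation for a balanced two-arm group (cluster) randomized trial
# under the usual two-level model; all parameter values are illustrative.
from scipy import stats

def grt_power(J, n, delta, rho, alpha=0.05):
    """Power of a balanced two-arm cluster randomized trial.

    J     -- total number of clusters, split equally across arms
    n     -- individuals per cluster
    delta -- standardized effect size
    rho   -- intraclass correlation (ICC)
    """
    # Variance of the treatment-effect estimate (outcome variance normalized to 1)
    var = 4 * (rho + (1 - rho) / n) / J
    nc = delta / var ** 0.5              # noncentrality parameter
    df = J - 2                           # cluster-level degrees of freedom
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return stats.nct.sf(t_crit, df, nc)  # P(reject H0 | effect = delta)

# Example: 40 schools of 60 students each, effect size 0.25, ICC 0.15
print(f"power = {grt_power(J=40, n=60, delta=0.25, rho=0.15):.3f}")
```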
The Development, Application, and Use of a Retrospective Pre-Post Test Instrument in the Evaluation of an Educational Program for Academically Gifted High School Students
Presenter(s):
Debra Moore,  University of Pittsburgh,  ceac@pitt.edu
Cynthia Tananis,  University of Pittsburgh,  tananis@education.pitt.edu
Abstract: The Pennsylvania Governor’s School for International Studies (PGSIS) is a six-week summer program designed to give academically talented high school students a challenging introduction to the study of international affairs and global issues. One focus of the evaluation of this program is to understand the program's effect on students’ perceptions of their knowledge of these issues. Across the 23-year history of the program, a variety of measures were used (and subsequently discarded) to assess changes in knowledge and perceived competence. Four years ago, the program instituted a retrospective pre-post design. Results from these years clearly indicate that students have consistently overestimated their pre-test understanding of the core competencies emphasized in the program, and that, as a result of the program, they are better able to assess both their knowledge gains and their initially inflated sense of knowledge. This paper presents an overview of the development, application, use, and analysis of a retrospective pre-post instrument to address response shift bias.
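As background on the design, a retrospective pre-post instrument asks participants at posttest time to re-rate their pre-program knowledge; the gap between the traditional pretest and this retrospective pretest estimates the response-shift bias. The sketch below is a hypothetical illustration of that logic, not the PGSIS instrument or its data; all numbers are simulated.

```python
# A hypothetical sketch (not the PGSIS instrument or data) of how a
# retrospective pre-post analysis quantifies response-shift bias: the gap
# between the traditional pretest and the retrospective pretest collected
# at posttest time. All numbers below are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 80
pre   = rng.normal(3.8, 0.6, n)           # traditional pretest self-rating (1-5 scale)
retro = pre - rng.normal(0.7, 0.4, n)     # retrospective pretest, rated lower
post  = retro + rng.normal(1.2, 0.5, n)   # posttest self-rating

shift = pre - retro                       # estimated response-shift bias
t, p = stats.ttest_rel(pre, retro)
print(f"mean response shift          = {shift.mean():.2f} (t = {t:.2f}, p = {p:.4f})")
print(f"naive gain (post - pre)      = {(post - pre).mean():.2f}")
print(f"adjusted gain (post - retro) = {(post - retro).mean():.2f}")
```

In this simulation the naive pre-post gain understates learning because the inflated traditional pretest absorbs part of the gain; the retrospective pretest recovers it.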
Developing a Body of Evidence Using Randomized Trials: Flexible Phases
Presenter(s):
John Gargani,  Gargani and Company Inc,  john@gcoinc.com
Abstract: Many organizations that advocate randomized trials, for example, the NIH and IES, promote a phased approach to research. While the exact number and nature of the phases may vary, they typically include non-experimental studies, followed by small randomized trials, followed in turn by large randomized trials. Unfortunately, a phased approach cannot be applied effectively in many evaluation settings because programs, unlike medical or pharmaceutical interventions, may need to change frequently in response to the demands of funders and markets. I present an alternative approach to organizing repeated randomized trials that I call flexible phases. I provide an example of how it has been applied to a series of randomized trials of a teacher professional development program conducted over seven years. I outline the merits and weaknesses of the approach, and discuss how flexible phases can be used to develop a body of evidence for programs and policies.
Getting More Mileage Using a Hybrid Design: Demonstrating the Utility of the Regression-Discontinuity and Meta-Analysis (RD-MA) Amalgam to Evaluate Developmental Education Programs
Presenter(s):
Brian Moss,  Oakland Community College,  bgmoss@oaklandcc.edu
William Yeaton,  University of Michigan,  bill.yeaton@yahoo.com
Abstract: Researchers who encounter cut-score-based pretests often use the regression-discontinuity (RD) research design to evaluate program effectiveness. This presentation demonstrates the potential of combining the RD design with meta-analysis (MA) into an RD-MA amalgam to evaluate developmental education programs within higher education. Using the sort of data that are readily available at colleges and universities, approximately 10,000 students and 400 course sections at a large Midwestern college are analyzed. Aggregate-level measurements of students who receive developmental education are contrasted with those of non-developmental students in subsequent college-level social science courses. Framing the results within the new RD-MA amalgam allows a clearer interpretation of the impact of the developmental program while controlling for potential instructor-related grading bias.
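As background on the method, the sketch below is a hypothetical simulation (not the authors' data, cut score, or code) of the two ingredients an RD-MA analysis combines: a sharp regression-discontinuity estimate computed per course section, and a fixed-effect, inverse-variance meta-analytic pooling of the section-level effects.

```python
# A hypothetical simulation (not the authors' analysis) of an RD-MA design:
# (1) a sharp regression-discontinuity estimate, fitting separate linear
# trends on each side of a placement-test cut score and taking the gap at
# the cutoff as the program effect, and (2) a fixed-effect, inverse-variance
# pooling of the per-section estimates. All values below are invented.
import numpy as np

rng = np.random.default_rng(1)

def rd_estimate(score, outcome, cutoff=50.0, h=10.0):
    """Local linear sharp-RD estimate within bandwidth h of the cutoff."""
    treated = (score < cutoff).astype(float)  # below cut score: developmental ed
    keep = np.abs(score - cutoff) <= h
    x = score[keep] - cutoff
    X = np.column_stack([np.ones(x.size), x, treated[keep], x * treated[keep]])
    y = outcome[keep]
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = res[0] / (y.size - X.shape[1])   # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)     # OLS covariance of coefficients
    return beta[2], cov[2, 2] ** 0.5          # effect at cutoff and its SE

# Simulate course sections, estimate each, then meta-analyze the effects
effects, ses = [], []
for _ in range(25):
    score = rng.uniform(20, 80, 400)                 # placement-test scores
    dev = (score < 50).astype(float)
    grade = (2.0 + 0.02 * (score - 50) + 0.30 * dev  # true effect 0.30 at cutoff
             + rng.normal(0, 0.5, 400))              # later course grade (GPA)
    est, se = rd_estimate(score, grade)
    effects.append(est)
    ses.append(se)

w = 1 / np.square(ses)                               # inverse-variance weights
pooled = np.dot(w, effects) / w.sum()
print(f"pooled RD effect = {pooled:.3f} (SE = {w.sum() ** -0.5:.3f})")
```

Pooling per-section RD estimates rather than fitting one regression across sections is what lets the design absorb section-to-section differences such as instructor grading bias.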
