Evaluation 2008

Session Title: Towards a Better PART: Re-analysis of the Random Assignment National Evaluation of Upward Bound; Bayesian Techniques for the Academic Competitiveness and SMART Grant Programs; and Regression Discontinuity for Small Grants Evaluation
Multipaper Session 119 to be held in Room 110 in the Convention Center on Wednesday, Nov 5, 4:30 PM to 6:00 PM
Sponsored by the Government Evaluation TIG
Chair(s):
David Goodwin,  United States Department of Education,  david.goodwin@ed.gov
Discussant(s):
Laura Perna,  University of Pennsylvania,  lperna@gse.upenn.edu
Abstract: The Government Performance and Results Act (GPRA) and the Office of Management and Budget (OMB) Program Assessment Rating Tool (PART) reflect increasing demands that federal programs provide rigorous evidence of effectiveness. Using critical analysis and multiple methods, we explore design, implementation, and measurement issues that arise in three studies employing different methods. The session includes an exploration of measurement error and a re-analysis of the controversial random assignment National Evaluation of Upward Bound that leads to a different conclusion than that originally reached in the PART process; an exploration of what Bayesian methods can contribute to the quasi-experimental observational study of the effectiveness of the Academic Competitiveness Grants (ACG) and National SMART Grants (NSG) program; and consideration of the potential of using regression-discontinuity designs for evaluating small national programs. We invite discussion concerning how to use evaluations to increase program effectiveness, federal accountability, and desired social change.
Exploring Measurement Error in the Random Assignment 1992-93 to 2003-04 National Evaluation of Upward Bound: Do Alternative Analyses Change the Conclusions?
Margaret Cahalan,  United States Department of Education,  margaret.cahalan@ed.gov
We take a critical look at measurement error relative to findings from the National Evaluation of Upward Bound (UB) and explore lessons learned from a policy and methodological perspective. The nationally representative random assignment study followed a multi-grade cohort from 1992-93 to 2003-04. Following reports of a lack of overall positive effects, OMB gave UB an ineffective PART rating, and budget recommendations for FY05 and FY06 called for zero funding. Major issues include: 1) unequal weighting; 2) treatment-control group equivalency; 3) survey non-response bias; 4) lack of standardization for expected high school graduation year (EHSGY); and 5) service substitution and dropout issues. Our major finding is that when administrative records are used to supplement survey data and outcomes are standardized by EHSGY, contrary to previously published reports, the UB program demonstrated statistically significant positive effects on the major goals of the program: postsecondary entrance, application for financial aid, and attainment of postsecondary credentials.
Using the Regression-Discontinuity Design for Measuring the Impact of Federal Discretionary Grant Programs for OMB's PART
Jay Noell,  United States Department of Education,  jay.noell@ed.gov
Federal programs are required by OMB to complete a PART (Program Assessment Rating Tool) for use in performance budgeting. Programs are given a numerical score (and an associated judgment) based on their PART, and 50 percent of the score depends on a program's performance results. Many smaller discretionary grant programs are unable to produce credible evidence of effectiveness and can be judged ineffective, which can result in OMB proposing to reduce or eliminate their budgets. This paper describes a way that many of those programs could be evaluated using a regression-discontinuity design (RDD). The advantage of the RDD is that it provides a basis for making unbiased causal inferences about program effectiveness when evaluations using randomized controlled trials (RCTs) are not possible. A number of discretionary grant programs funded through the U.S. Department of Education and other federal agencies could be evaluated this way. But with what results?
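As a purely illustrative sketch (not drawn from the session materials), the following Python example shows the kind of sharp regression-discontinuity estimate the abstract alludes to, using simulated data in which a hypothetical score cutoff determines grant awards; the cutoff, bandwidth, and effect size are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated applicants: a continuous score determines grant awards at a cutoff.
    n, cutoff, bandwidth = 5000, 50.0, 10.0
    score = rng.uniform(0, 100, n)
    treated = (score >= cutoff).astype(float)

    # Hypothetical outcome: smooth in the score, with a 2-point jump at the cutoff.
    outcome = 20 + 0.3 * score + 2.0 * treated + rng.normal(0, 3, n)

    # Sharp-RDD estimate: local linear regression within the bandwidth,
    # allowing separate slopes on each side of the cutoff.
    window = np.abs(score - cutoff) <= bandwidth
    s = score[window] - cutoff
    d = treated[window]
    X = np.column_stack([np.ones(s.size), d, s, d * s])
    beta, *_ = np.linalg.lstsq(X, outcome[window], rcond=None)

    print(f"Estimated jump at the cutoff (program effect): {beta[1]:.2f}")

Because award decisions for many discretionary grant programs already hinge on an application score, the same specification could in principle be applied to administrative data rather than a simulation.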
A Comparison of Bayesian and Standard Methods in Evaluating the Academic Competitiveness Grant (ACG) and National SMART Grant (NSG) Programs
Sharon Stout,  United States Department of Education,  sharon.stout@ed.gov
What are the challenges of using national data sets and financial aid award data to measure trends, and interruptions in trends over time, that may be attributed to new legislation? How can Bayesian statistical techniques contribute to these analyses? The ACG and NSG programs increase Pell Grant awards to provide additional incentives for eligible students to take a more rigorous program of study in high school or, as 3rd- and 4th-year students, to major in mathematics, science, or critical foreign languages. Using national surveys and transcript study data, as well as federal student aid Pell award databases, we apply Bayesian statistical techniques and consider how to model the data appropriately to generate efficient and robust inferences that make full use of quantitative and structural prior information. These methods and their results are compared to standard methods.
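As another purely illustrative sketch (not taken from the paper), the following Python example contrasts a standard interrupted-trend estimate with a conjugate Bayesian estimate that shrinks the estimated policy effect toward an informative prior; the outcome series, the prior, and the 2006 start year are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical annual outcome with a level shift after a policy year.
    years = np.arange(2000, 2008)
    post = (years >= 2006).astype(float)   # illustrative program start year
    y = 10 + 0.5 * (years - 2000) + 1.5 * post + rng.normal(0, 1.0, years.size)

    # "Standard" estimate: OLS coefficient on the post-policy indicator.
    X = np.column_stack([np.ones(years.size), years - 2000, post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ols_effect = beta[2]

    # Bayesian estimate: a conjugate Normal prior on the interruption effect,
    # combined with the approximate Normal likelihood of the OLS estimate.
    prior_mean, prior_var = 0.0, 1.0       # skeptical prior centered at zero
    resid = y - X @ beta
    samp_var = resid.var(ddof=3) * np.linalg.inv(X.T @ X)[2, 2]
    post_var = 1.0 / (1.0 / prior_var + 1.0 / samp_var)
    post_mean = post_var * (prior_mean / prior_var + ols_effect / samp_var)

    print(f"OLS interruption estimate: {ols_effect:.2f}")
    print(f"Posterior mean (shrunken): {post_mean:.2f}")

With only a few years of data, the posterior mean pulls the estimated interruption toward the prior; as more post-policy observations accumulate, the Bayesian and standard estimates converge.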
