Session Title: What Have We Learned About Education Programs, How Do We Use What We've Learned, and Where Do We Go From Here?
Multipaper Session 298 to be held in Sebastian Section I1 on Thursday, Nov 12, 1:40 PM to 3:10 PM
Sponsored by the Government Evaluation TIG and the PreK-12 Educational Evaluation TIG
Chair(s):
Alan L Ginsburg, United States Department of Education, alan.ginsburg@ed.gov
Discussant(s):
David Goodwin, Bill & Melinda Gates Foundation, david.goodwin@gatesfoundation.org
Abstract: The Office of Management and Budget (OMB) developed the Program Assessment Rating Tool (PART) by executive order of President Bush. PART required federal programs to provide evidence of program effectiveness with their budget requests. PART questions addressed relevant evaluations, and OMB emphasized randomized controlled trials. With a new administration, the context has changed: President Obama has called for fundamentally reconfiguring the PART and has suspended reviews. This session examines the use of education evaluations from differing perspectives and contexts, including a review of PART assessments for 130 federal education programs; a review of 75 evaluations conducted for the Department of Education, focusing on their rigor and relevance; a model for translating research into improved practice; and a new evaluation model based on a systematic review of evaluations of college access and retention programs. We invite discussion on using research, evaluations, and performance measures to increase program effectiveness, accountability, and desired social change.
Education Programs: Program Assessment Rating Tool (PART) Assessments, Evaluation, Performance Measures and Targets - Past Patterns and Future Directions
Sharon Stout, United States Department of Education, sharon.stout@ed.gov
Jay Noell, United States Department of Education, jay.noell@ed.gov
Margaret Cahalan, United States Department of Education, margaret.cahalan@ed.gov
Zeest Haider, United States Department of Education, zh2124@barnard.edu
This paper reviews PART assessments for 130 federal education programs through 2008: 94 were Department of Education programs, with the remainder spread across 14 other agencies (including Defense, Health and Human Services, Interior, Labor, and the National Science Foundation). OMB's website, www.ExpectMore.gov, reports each program's type, overall rating, scores, answers to PART questions, and performance measures and targets. This information was supplemented with agency strategic plans, goals, and budgets. The analysis compares how programs are structured and summarizes patterns within and across agencies. Program types and structures are analyzed against measures, targets, and scores on the four PART components: program purpose and design, strategic planning, program management, and program results and accountability. In considering how linkages across education programs (and agencies) might be made more strategic and cross-cutting, the paper suggests ways to improve performance measures and the reporting of program contributions to outcomes of interest to the public.
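To make the cross-agency tabulation concrete, here is a minimal sketch of the kind of summary the analysis describes. The records below are invented placeholders, not actual ExpectMore.gov data; only the rating categories (Effective, Moderately Effective, Adequate, Ineffective, Results Not Demonstrated) reflect PART's actual rating scale.

```python
# Illustrative sketch: tally hypothetical PART overall ratings by agency.
# The records below are invented placeholders, not actual ExpectMore.gov data.
from collections import Counter, defaultdict

assessments = [  # (agency, overall_rating) -- hypothetical values
    ("Education", "Moderately Effective"),
    ("Education", "Results Not Demonstrated"),
    ("Labor", "Adequate"),
]

by_agency = defaultdict(Counter)
for agency, rating in assessments:
    by_agency[agency][rating] += 1

for agency, counts in sorted(by_agency.items()):
    total = sum(counts.values())
    print(f"{agency}: {total} programs, ratings {dict(counts)}")
```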
A Systematic Review of Studies Sponsored by the Department of Education to Evaluate Federal Education Programs: Lessons Learned Methodologically and Substantively
Margaret Cahalan, United States Department of Education, margaret.cahalan@ed.gov
Sharon Stout, United States Department of Education, sharon.stout@ed.gov
This paper reviews a set of 75 studies completed over the past decade by the Policy and Program Studies Service (PPSS) and the Institute of Education Sciences (IES) to address evaluation questions concerning federal education programs. Most of these studies were either congressionally mandated or undertaken to inform PART assessments. The paper adapts the systematic review methods developed by the Evidence for Policy and Practice Information and Coordinating Centre (EPPI-Centre) at the University of London (http://eppi.ioe.ac.uk/cms/). The systematic review will address the questions asked, design, statistical methods, metrics developed, study validity and measurement error issues, and findings of each study. A major focus of the review is the lessons learned for future work, both methodological and substantive.
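As one illustration of the coding dimensions listed above, a reviewer's per-study record might be structured as follows. This is a minimal sketch for exposition only; the field names are assumptions, not the EPPI-Centre instrument itself.

```python
# Illustrative sketch of a per-study coding record for the review dimensions
# named above. Field names are assumptions, not the EPPI-Centre instrument.
from dataclasses import dataclass, field

@dataclass
class StudyRecord:
    title: str
    sponsor: str                     # e.g., "PPSS" or "IES"
    questions_asked: list = field(default_factory=list)
    design: str = ""                 # e.g., "randomized controlled trial"
    statistical_methods: list = field(default_factory=list)
    metrics_developed: list = field(default_factory=list)
    validity_issues: list = field(default_factory=list)  # incl. measurement error
    findings_summary: str = ""

# A coded entry might look like this (placeholder values):
record = StudyRecord(
    title="(study title)",
    sponsor="PPSS",
    design="quasi-experimental comparison",
    questions_asked=["Did the program improve student outcomes?"],
)
print(record.sponsor, record.design)
```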
Translating Research to Practice in Education
Jennifer Ballen Riccards, United States Department of Education, jennifer.ballen.riccards@ed.gov
How are research findings best conceptualized, translated, and communicated to a practitioner audience? Any good translation identifies the core meaning and considers how the message will be both sent and received. "Doing What Works" (www.dww.ed.gov) is a multimedia website sponsored by the Department of Education dedicated to helping educators identify and use effective, research-based teaching practices. The Doing What Works approach takes three steps toward this goal: 1) consolidating research findings into recommended teaching practices; 2) translating recommended practices into multimedia examples of teaching and other information on a website; and 3) communicating to practitioner audiences (teachers and professional developers in K-12 education) using these resources. We will discuss issues in conceptualizing research findings, maintaining fidelity to the research, considering the audience when translating research to practice, and developing effective vehicles, including innovative media types, for communicating key ideas and promising practices.
Lessons Learned From the Past Decade in Designing a New Generation of Evaluations of College Access and Retention Programs
Margaret Cahalan, United States Department of Education, margaret.cahalan@ed.gov
This paper takes an in-depth look at six federal education programs in the area of college access and retention, reviewing their PART scores and the evaluation studies on which those scores were based. All six programs have been evaluated with varying degrees of rigor in addressing program impact: Student Support Services, Upward Bound, Talent Search, McNair, GEAR UP, and CCAMPIS. The work focuses on the interaction of evaluation, PART, budget development, congressional action, and Department of Education policy development. The paper addresses lessons learned both methodologically and substantively, and it describes a new standards-based model for program evaluation that emphasizes partnership, reflection, and innovation.