Maturation Effects of Multi-cycle Initiatives: The Importance of Time Sensitive Indicators When Evaluating Impact

Presenter(s):

Lindsey Rosecrans, State University of New York at Albany, lr759168@albany.edu

Kathy Gullie, State University of New York at Albany, kp9854@albany.edu

Abstract:
The purpose of this paper is to present the importance of the maturity and continuity of grant evaluation strategies and findings when evaluating impact on student achievement. The paper compares the outcomes of two initiatives, taking into account differences in program maturity levels. Using a drill-down process, the projects assessed student-related outcomes via a series of data sources that included statewide tests, classroom observations, individual student classroom performance, and student portfolios. Evaluation data were used to support a series of analyses that traced students across grades, teachers, and instructional modes. Analysis of student work and its relationship with the maturity and continuity of grant initiatives reflected the increasingly complex nature of evaluating student achievement. Techniques for treating maturation as a covarying variable will also be discussed.


Enhancing Evaluation Quality of Programs Serving Youth: Data Collection Strategies and Ethical Issues

Presenter(s):

Katherine Byrd, Claremont Graduate University, katherine.byrd@cgu.edu

Tiffany Berry, Claremont Graduate University, tiffany.berry@cgu.edu

Susan Menkes, Claremont Graduate University, susan.menkes@cgu.edu

Krista Collins, Claremont Graduate University, krista.collins@cgu.edu

Abstract:
Program evaluators face unique challenges collecting data with youth, yet including children in the evaluation process has been shown to enhance the quality of inquiry (Walker, 2007). Understanding developmentally appropriate strategies for collecting data among youth will not only enhance the quality of the data collected but will also increase sensitivity for detecting program effects. The purpose of this paper is threefold. First, we will discuss the importance of considering age (e.g., chronological age, developmental level) and developmental domains (e.g., cognitive, social, physical) when evaluating programs serving children and youth. Second, we will discuss quantitative and qualitative strategies for enhancing the quality of responses and data gathered from children, especially in relation to changes in age and domain across time. Third, we will discuss how evaluation quality improves with respect to ethics when one considers the salient developmental issues pertinent to conducting evaluations with children.


Impacts of Comprehensive Teacher Induction: Results From the Second Year of a Randomized Controlled Study

Presenter(s):

Eric Isenberg, Mathematica Policy Research, eisenberg@mathematica-mpr.com

Steven Glazerman, Mathematica Policy Research, sglazerman@mathematica-mpr.com

Martha Bleeker, Mathematica Policy Research, mbleeker@mathematica-mpr.com

Amy Johnson, Mathematica Policy Research, ajohnson@mathematica-mpr.com

Julieta Lugo-Gil, Mathematica Policy Research, jlugo-gil@mathematica-mpr.com

Mary Grider, Mathematica Policy Research, mgrider@mathematica-mpr.com

Sarah Dolfin, Mathematica Policy Research, sdolfin@mathematica-mpr.com

Edward Britton, WestEd, tbritto@wested.org

Abstract:
This project evaluates the impacts of comprehensive teacher induction on teacher instructional practices, attitudes, retention, and student achievement. We use an experimental design in which 252 elementary schools with 561 beginning teachers in 10 school districts were randomly assigned to either a treatment group receiving one year of comprehensive induction or a control group, and 166 elementary schools with 448 beginning teachers in 7 other school districts were randomly assigned to either a treatment group receiving two years of comprehensive induction or a control group. Teachers in the control group took part in the induction programs provided by their districts. We collected survey and administrative data for four years after random assignment in summer 2005 and conducted classroom observations during the spring of teachers' first year. Although treatment teachers received significantly more support than control teachers during the intervention, this did not translate into significant impacts on the outcomes after two years.


Using a Mixed Methods Design to Identify Exemplary College Access Centers in Texas High Schools

Presenter(s):

Jacqueline Stillisano, Texas A&M University, jstillisano@tamu.edu

Hersh Waxman, Texas A&M University, hwaxman@tamu.edu

Yuan-Hsuan Lee, Texas A&M University, jasviwl@neo.tamu.edu

Kayla Braziel Rollins, Texas A&M University, kaylarollins@gmail.com

Rhonda Goolsby, Texas A&M University, rhonda2000@tamu.edu

Chyllis Scott, Texas A&M University, chyllisscott@neo.tamu.edu

Abstract:
This paper reports on an evaluation study commissioned to examine the impact of college access centers, called GO Centers, established in Texas high schools with the goal of assisting students with college preparation activities and increasing college applications and enrollment in the schools in which the Centers were implemented. Using a mixed-methods design, the evaluation team examined factors that contributed to a GO Center's success in creating a college-going culture and identified practices and program components that were prevalent across exemplary centers. Quantitative data were used to identify 30 GO Centers that demonstrated exemplary outcomes on at least one of several variables. Six of these 30 sites were then selected for in-depth case studies designed to provide a comprehensive picture of each Center's specific experiences, challenges, and opportunities related to developing and implementing an exemplary college access program.