Evaluation 2008



Session Title: Measuring Student Achievement: Issues and Implications for Educational Evaluation
Multipaper Session 111 to be held in the Granite Room Section A on Wednesday, Nov 5, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Susan Connors,  University of Colorado Denver,  susan.connors@cudenver.edu
Critical Dimensions of Authenticity in Assessment
Presenter(s):
Bruce B Frey,  University of Kansas,  bfrey@ku.edu
Vicki Schmitt,  University of Alabama,  vschmitt@bamaed.ua.edu
Abby Bowen,  University of Kansas,  aao1984@ku.edu
Abstract: A well-accepted position among educational evaluators is that the best classroom assessments are authentic (e.g., Archbald & Newman, 1988; Bergen, 1993; Gronlund, 2003; Meyer, 1992; Newman, Brandt & Wiggins, 1998; Wiggins, 1989a, 1989b). The term "best" typically means valid, and "authentic" is usually defined as having something to do with the real world. While most authors discuss authenticity in the context of application outside the classroom, some do not, emphasizing instead other aspects of assessments that determine their authenticity. We have completed an initial analysis of the most-cited scholars and advocates and offer a summary of the elements of authenticity that appear, by consensus, to be critical. Somewhat surprisingly, those elements include components unrelated to the realism of the assessment.
Development of an Evaluation Rubric for Engineering and Science Skills Assessment Through Student-Maintained Portfolios
Presenter(s):
Rucha Londhe,  Goodman Research Group Inc,  londhe@grginc.com
Abstract: The proposed paper is based on a pilot study conducted to evaluate an NSF-funded Innovative Technology Experiences for Students and Teachers (ITEST) project aimed at using a collaborative online environment to facilitate hands-on science and engineering activities for middle and high school students. Along with other goals, the project sought to foster engineering, programming, and workforce skills in student participants. To evaluate this outcome, students were asked to maintain a portfolio throughout the two weeks of summer camp they attended. These portfolios served as the unit of analysis for an embedded assessment, and a rubric was created to evaluate them. The proposed paper outlines the process of rubric development, explains the scoring based on the rubric, and provides examples from the actual portfolios. In addition, the paper assesses the use of portfolios in the context of broader methodological issues in embedded assessment.
The Impact of Benchmark Assessment on Student Achievement In Middle School Math: Two-Year Post-Implementation
Presenter(s):
Susan Henderson,  WestEd,  shender@wested.org
Anthony Petrosino,  WestEd,  apetros@wested.org
Sarah Guckenburg,  WestEd,  sgucken@wested.org
Steve Hamilton,  WestEd,  shamilt@wested.org
Abstract: This study examines whether districts using quarterly benchmark exams in middle school mathematics show greater gains in student achievement than those not employing this practice. The study will examine differences in grade 8 student achievement, as measured by the Massachusetts Comprehensive Assessment System (MCAS), in schools using quarterly benchmark assessments aligned with the Massachusetts Curriculum Frameworks standards for mathematics, two years post-implementation. The study offers evaluators practical insight into conducting scientifically based evaluations of educational programs through a quasi-experimental design that does not require randomization to a treatment group prior to the implementation of an educational innovation. The results have the potential to provide a solid research base to inform district- and school-level practices in the use of benchmark assessment to increase student achievement on state education standards.
Examination of Potential Growth Models Using Massachusetts Comprehensive Assessment System Mathematics Scores
Presenter(s):
Susan Henderson,  WestEd,  shender@wested.org
Natalie Lacireno-Paquet,  WestEd,  npaquet@wested.org
Craig Hoyle,  Education Development Center Inc,  choyle@edc.org
Abstract: This study, conducted by the federally funded Regional Educational Laboratory of the Northeast and Islands, evaluated three potential growth-model approaches to measuring student achievement using the MCAS mathematics test. With the US Department of Education approving states' use of growth models as part of school and district accountability under No Child Left Behind, this study used real data to gauge how Adequate Yearly Progress (AYP) determinations would differ. For Massachusetts, there is interest in using growth models as a policy tool to concentrate district attention on groups or individual students over time. Three growth-model approaches were applied to a cohort of students followed from 4th to 8th grade using MCAS math data: a z-score approach, a multi-level modeling approach, and a vertically aligned achievement-level approach (transition matrix approach). Results suggest that growth models will not produce a large boost in the number of students counting toward AYP over time but may be a useful tool in helping districts focus limited resources on accelerating student achievement.
