Contact emails are provided for one-to-one contact only and may not be used for mass emailing or group solicitations.

Session Title: Assessing the Use of Test Score Data to Inform Decisions About Student Achievement
Multipaper Session 845 to be held in BONHAM D on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Tara Pearsall, Savannah College of Art and Design, tpearsal@scad.edu
Discussant(s):
Susan Henderson, WestEd, shender@wested.org
Data Mining Electronically Linked Grade Three Standardized Assessment Scores From Kindergarten Assessments to Identify Performance Patterns
Presenter(s):
Deborah Carran, Johns Hopkins University, dtcarran@jhu.edu
Jacqueline Nunn, Johns Hopkins University, jnunn@jhu.edu
Tamara Otto, Johns Hopkins University, tamaraotto@jhu.edu
Abstract: The linkage of unique student identifiers across grade levels has generated renewed interest in predicting high-stakes test scores at early ages. Data mining, an iterative process that uses large extant data warehouses to discover meaningful patterns, was applied to examine the relationship between kindergarten assessments and Grade 3 high-stakes reading and math assessments. A total of 152,105 students were identified as having received a kindergarten assessment between 2002 and 2005. Of these, 100,957 were matched with their Grade 3 standardized math score and 100,978 with their Grade 3 reading score, a match rate of 66%. Using Classification and Regression Tree (CART) modeling, results are presented as tree-like figures whose branches represent the splitting of cases on values of predictor attributes. Results indicated that the kindergarten assessment is a moderately successful predictor of later high-stakes testing performance, with math performance predicted better than reading.
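
To make the CART approach described above concrete, here is a minimal sketch using scikit-learn's tree module, which implements an optimized CART algorithm. The input file, column names, and proficiency threshold below are hypothetical placeholders for illustration, not details taken from the study.

    # Minimal CART sketch: predict Grade 3 high-stakes math outcomes
    # from kindergarten assessment attributes. File name, columns, and
    # the proficiency cutoff are hypothetical placeholders.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Linked records: one row per student matched across grade levels.
    df = pd.read_csv("linked_k_to_grade3.csv")

    # Hypothetical predictor attributes from the kindergarten assessment.
    predictors = ["k_literacy", "k_math", "k_social", "k_motor"]
    X = df[predictors]

    # Binary outcome: reached proficiency on the Grade 3 math test
    # (the 400-point cutoff is an illustrative placeholder).
    y = (df["grade3_math_score"] >= 400).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )

    # A shallow tree keeps the result readable as a branching figure.
    tree = DecisionTreeClassifier(max_depth=4, random_state=0)
    tree.fit(X_train, y_train)

    print(f"Holdout accuracy: {tree.score(X_test, y_test):.2f}")
    # The printed rules mirror the tree-like figures in the abstract:
    # each branch splits cases on a predictor attribute value.
    print(export_text(tree, feature_names=predictors))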
Using Student Test Scores to Evaluate Performance
Presenter(s):
Steven Glazerman, Mathematica Policy Research, sglazerman@mathematica-mpr.com
Liz Potamites, Mathematica Policy Research, lpotamites@mathematica-mpr.com
Abstract: There are many ways to use student test scores to evaluate the effectiveness of teachers or schools. This paper compares regression-based “value added” indicators to alternative estimators that are potentially simpler and cheaper. Such alternatives include those based on changes in average test scores for a given cohort in successive grades (average gains) and those based on changes in successive cohorts’ average scores in the same grade (cohort changes). We argue that while average gain indicators can potentially provide useful information, they have important limitations that must be taken into account. Cohort change indicators, however, are misleading and should be avoided.
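
For concreteness, the three families of indicators contrasted in the abstract can be written out as follows; the notation is an assumption made here, with Y_{igt} denoting student i's score in grade g in year t and \bar{Y}_{g,t} a school's mean score in grade g in year t.

    % Illustrative notation only; not the paper's own specification.
    % Value-added: regress current scores on prior scores (and
    % controls X), attributing the school/teacher effect to delta_s.
    \[
    Y_{igt} = \alpha + \beta\, Y_{i,g-1,t-1} + \gamma' X_{it} + \delta_s + \varepsilon_{igt}
    \]
    % Average gain: the same cohort followed across successive grades.
    \[
    \mathrm{AvgGain}_{g,t} = \bar{Y}_{g,t} - \bar{Y}_{g-1,t-1}
    \]
    % Cohort change: successive cohorts compared in the same grade.
    \[
    \mathrm{CohortChange}_{g,t} = \bar{Y}_{g,t} - \bar{Y}_{g,t-1}
    \]

Because the cohort change compares two different groups of students, differences in cohort composition are entangled with school effectiveness, which is the kind of limitation behind the abstract's warning against such indicators.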
