
Session Title: Estimating Rater Consistency: Which Method Is Appropriate?
Demonstration Session 286 to be held in Lone Star E on Thursday, Nov 11, 1:40 PM to 3:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Robert Johnson, University of South Carolina, rjohnson@mailbox.sc.edu
Min Zhu, University of South Carolina, helen970114@gmail.com
Grant Morgan, University of South Carolina, praxisgm@aol.com
Vasanthi Rao, University of South Carolina, vasanthiji@yahoo.com
Abstract: When essays, portfolios, or other complex performance assessments are used in program evaluations, scoring the assessments requires raters to make judgments about the quality of each examinee’s performance. Concerns about the objectivity of raters’ assignment of scores have contributed to the development of scoring rubrics, methods of rater training, and statistical methods for examining the consistency of raters’ scoring. Statistical methods for examining rater consistency include percent agreement and interrater reliability estimates (e.g., the Spearman correlation and the generalizability coefficient). This session describes each method, demonstrates its calculation, and explains when each is appropriate.
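As a minimal illustration of two of the statistics named above (not part of the session materials), the following Python sketch computes exact percent agreement and the Spearman correlation for hypothetical scores assigned by two raters; the data and variable names are assumptions for demonstration only.

```python
# Minimal sketch: percent agreement and Spearman correlation for two raters.
# The scores below are hypothetical rubric ratings (1-4) for ten essays.
from scipy.stats import spearmanr

rater_a = [3, 2, 4, 3, 1, 2, 4, 3, 2, 1]
rater_b = [3, 2, 3, 3, 1, 2, 4, 2, 2, 1]

# Percent agreement: proportion of essays receiving identical scores.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

# Spearman correlation: consistency of the raters' rank ordering of essays.
rho, p_value = spearmanr(rater_a, rater_b)

print(f"Percent agreement: {agreement:.0%}")
print(f"Spearman rho: {rho:.2f} (p = {p_value:.3f})")
```

Percent agreement is sensitive to exact score matches, while the Spearman correlation reflects only rank ordering; the generalizability coefficient, discussed in the session, additionally partitions sources of score variance.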
