

Session Title: To Agree or Disagree: The Measure and Use of Interrater Reliability
Demonstration Session 544 to be held in Panzacola Section H2 on Friday, Nov 13, 1:40 PM to 3:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Presenter(s):
Caroline Wiley, University of Arizona, crhummel@u.arizona.edu
Julius Najab, George Mason University, jnajab@gmu.edu
Simone Erchov, George Mason University, sfranz1@gmu.edu
Abstract: In this demonstration we will provide a conceptual overview of current and commonly used interrater reliability procedures and discuss how to apply them in practice. We will also address key concerns surrounding rater training, such as diagnostic methods and achieving reliability without sacrificing validity. Observational measurement plays an integral role in program evaluation. Because each observer draws their own inferences, it is essential that all raters observe and interpret the same events similarly; a lack of such agreement can greatly distort conclusions about the effectiveness of the intervention or performance being measured. We aim to help evaluators who have a basic understanding of measurement theory and statistics make competent decisions about assessing interrater reliability. Understanding the various methods for assessing agreement and disagreement, at both the conceptual and practical levels, helps reduce the influence of subjective judgment. Assessing the interrater reliability of observational measures is vital for drawing accurate inferences.
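
As a brief illustration of the kind of agreement statistics the session covers (this sketch is not part of the session materials), the following Python snippet computes two common measures for two raters assigning nominal categories to the same items: percent agreement and Cohen's kappa, which corrects observed agreement for agreement expected by chance. The rating data shown are hypothetical.

    # Minimal sketch: percent agreement and Cohen's kappa for two raters.
    from collections import Counter

    def percent_agreement(r1, r2):
        """Proportion of items on which the two raters assigned the same category."""
        return sum(a == b for a, b in zip(r1, r2)) / len(r1)

    def cohens_kappa(r1, r2):
        """Cohen's kappa: observed agreement corrected for chance agreement."""
        n = len(r1)
        p_obs = percent_agreement(r1, r2)
        c1, c2 = Counter(r1), Counter(r2)
        # Chance agreement: probability both raters independently pick the same category.
        p_exp = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
        return (p_obs - p_exp) / (1 - p_exp)

    # Hypothetical ratings from two observers of the same ten observation periods.
    rater_a = ["yes", "yes", "no", "yes", "no", "yes", "no", "no", "yes", "yes"]
    rater_b = ["yes", "no",  "no", "yes", "no", "yes", "no", "yes", "yes", "yes"]

    print(f"Percent agreement: {percent_agreement(rater_a, rater_b):.2f}")  # 0.80
    print(f"Cohen's kappa:     {cohens_kappa(rater_a, rater_b):.2f}")       # 0.58

Note how the two statistics diverge: the raters agree on 80% of items, but once chance agreement is removed, kappa is only about 0.58, which is why agreement measures that ignore chance can overstate reliability.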
