Evaluation 2011


Session Title: Fidelity Checks and Interpretation in Producing Evaluation Value
Multipaper Session 903 to be held in Pacific C on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
M H Clark, University of Central Florida, mhclark@mail.ucf.edu
Evaluators' Use of Questionnaires and Score Interpretation
Presenter(s):
Randall Schumacker, University of Alabama, rschumacker@ua.edu
Abstract: Evaluators often use survey research methods to collect data on attitudes, perceptions, satisfaction, or opinions when evaluating persons or programs. Questionnaires use response scales for sets of items; however, most scales yield ordinal data (e.g., SA, A, N, D, SD) or have a limited numerical range, e.g., 1 to 5 (Alreck & Settle, 2004). How variables are measured or scaled influences the type of statistical analyses we should conduct (Anderson, 1961; Stevens, 1946). Parametric statistics are typically used to analyze such survey data, often without the statistical assumptions being met. Rasch rating scale analysis of questionnaire responses can produce continuous measures from ordered categorical responses, and it also supports interpretation of the effectiveness of the rating scale categories. Rasch person logits can be entered into a linear transformation formula to produce scaled scores ranging from 0 to 100, providing meaningful score interpretation and supporting statistical analysis.
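As an illustration of the kind of linear transformation the abstract describes, the sketch below rescales Rasch person logits onto a 0 to 100 range. This is a minimal sketch under assumed values: the logits and the calibration endpoints are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch: linearly rescale Rasch person logits to a 0-100 score.
# The logit values below are invented; the paper's actual data are not shown.

def logits_to_scaled(logits, lo=None, hi=None):
    """Map person logits onto a 0-100 scale via a linear transformation.

    lo/hi default to the observed min/max; a fixed calibration range
    could be supplied instead (a design choice, not the paper's method).
    """
    lo = min(logits) if lo is None else lo
    hi = max(logits) if hi is None else hi
    return [100.0 * (x - lo) / (hi - lo) for x in logits]

person_logits = [-2.1, -0.4, 0.0, 0.9, 2.3]   # invented example values
print(logits_to_scaled(person_logits))
# -> [0.0, 38.64, 47.73, 68.18, 100.0] (approximately)
```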
Values in Practical Evaluation: The Development and Validation of a Self-Reported Measure of Program Fidelity
Presenter(s):
Dennis Johnston, AVID Center, djohnston@avidcenter.org
Philip Nickel, AVID Center, pnickel@avidcenter.org
Abstract: Advancement Via Individual Determination (AVID) is a college preparatory system with the mission of preparing all students for college and career readiness. As AVID grew to over 4,000 schools, the need for a measure of program implementation fidelity became apparent. AVID staff developed the Certification Self-Study (CSS) measure to assist schools in implementing AVID and to provide the AVID Center with the information necessary to monitor the quality, consistency, and fidelity of AVID programs. This paper examines the values inherent in the process of developing this measure and the psychometric evaluation of the CSS. Results indicate that each subscale met sufficient levels of internal consistency and that sites with higher levels of implementation fidelity evidenced stronger outcomes. Discussion includes the value of using psychometrically validated measures, the educational values inherent in the AVID program, and how the method of CSS data collection reflects those values.
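The abstract does not specify which internal-consistency statistic was used; as a hedged illustration of one common check, the sketch below computes Cronbach's alpha for a single hypothetical subscale. The item responses are invented and do not reflect the CSS instrument's actual items or data.

```python
# Hypothetical sketch: Cronbach's alpha for one subscale's item responses.
# The response matrix is invented; the CSS's real items are not shown.

def cronbach_alpha(items):
    """items: list of equal-length lists, one list of scores per item."""
    k = len(items)                      # number of items in the subscale
    n = len(items[0])                   # number of respondents

    def var(xs):                        # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

subscale = [                            # 3 items x 5 respondents (invented)
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 4],
]
print(round(cronbach_alpha(subscale), 2))   # -> 0.9 for these toy data
```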
Using Measures of Implementation to Enhance the Interpretability of Experiments
Presenter(s):
Mark Hansen, University of California, Los Angeles, markhansen@ucla.edu
Abstract: The analyses presented here are based on a study of an innovative high school curriculum and related teacher professional development activities. The curriculum focused on improving students' reading comprehension. A variety of measures were used to assess the extent to which teachers implemented the prescribed curriculum. The extent to which students utilized various reading strategies emphasized within the curriculum was also examined. Treatment effects were estimated using multilevel models (students nested within classrooms). Following the approach of Hulleman and Cordray (2009), we calculated indices of treatment fidelity across the study groups for both teacher and student variables. Finally, we investigated how treatment strength may have affected inferences about effectiveness. There was evidence of a positive treatment effect on literacy instruction but no significant effects on student implementation (utilization of reading strategies) or student outcome variables. However, there appeared to be positive relationships between student implementation variables and student outcomes.
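The fidelity indices of Hulleman and Cordray (2009) are, in essence, standardized treatment-control differences in observed implementation. The sketch below computes such an achieved-relative-strength style index for hypothetical classroom fidelity scores; all values are invented, and the plain pooled-SD standardization used here may differ in detail from the paper's exact estimator.

```python
# Hypothetical sketch: an achieved-relative-strength style fidelity index,
# i.e., the standardized treatment-control difference in observed fidelity.
# Scores are invented; this uses a plain pooled-SD standardization, which
# may differ in detail from Hulleman and Cordray's (2009) estimator.
from statistics import mean, stdev

def achieved_relative_strength(treat, control):
    """Standardized difference in fidelity between study groups."""
    n_t, n_c = len(treat), len(control)
    pooled_sd = (((n_t - 1) * stdev(treat) ** 2 +
                  (n_c - 1) * stdev(control) ** 2) / (n_t + n_c - 2)) ** 0.5
    return (mean(treat) - mean(control)) / pooled_sd

treatment_fidelity = [0.82, 0.74, 0.91, 0.68, 0.77]   # invented classroom scores
control_fidelity   = [0.35, 0.42, 0.28, 0.51, 0.39]
print(round(achieved_relative_strength(treatment_fidelity, control_fidelity), 2))
```

A large index suggests the treatment was delivered much more faithfully than the control condition's baseline practice, which strengthens the interpretability of any estimated treatment effect.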
