Session Title: Evaluations of Reading and Literacy Programs
Multipaper Session 680 to be held in Fairmont Suite on Friday, November 9, 4:30 PM to 6:00 PM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Edith Stevens,  Macro International Inc,  edith.s.stevens@orcmacro.com
Comparing Self-report Logs with Classroom Observation of Reading Instruction
Presenter(s):
David Quinn,  Chicago Public Schools,  dwquinn@cps.k12.il.us
Kelci Price,  Chicago Public Schools,  kprice1@cps.k12.il.us
Annette Marek,  Chicago Public Schools,  annettemarek@gmail.com
Alvin Quinones,  Chicago Public Schools,  agquinon@uchicago.edu
Mangi Arugam,  Chicago Public Schools,  marugam@cps.k12.il.us
Abstract: The purpose of this evaluation was to assess the implementation and use of a district-created guidebook of reading instruction practices. Classroom observations were conducted over three months in kindergarten through twelfth-grade classrooms. A total of 70 classrooms were observed, each for an entire class period of approximately 45 minutes to one hour; classrooms were observed twice, for a total of 138 observations. A second sample of teachers was recruited from elementary and high schools across the district to complete self-report reading instruction logs for their classes. Each teacher was asked to complete up to sixty logs, rotating through a sample of students in their classes. The teacher logs focused on the same reading instruction topics as the observation protocol. Results from the observations were compared with findings from the self-report logs, and similarities and differences in findings were noted.
Criteria, Interferences, and Flexibility: Issues From a School District Evaluation
Presenter(s):
Linda Mabry,  Washington State University, Vancouver,  mabryl@vancouver.wsu.edu
Abstract: In this paper, I report on a four-year evaluation of a project to improve student literacy in four large high schools in one school district. With federal funding, the district implemented a Smaller Learning Communities (SLC) program that involved new or revised reading curricula and professional development in literacy instruction for teachers of literacy/English classes as well as teachers of non-literacy classes. The schools also implemented grade-level advisories intended as supportive communities to which students would belong throughout their high school careers; peer mentoring of freshmen by upperclassmen to ease the transition to high school; and portfolios for showcasing students' work, sharing it and their plans with parents, and facilitating a productive transition to postsecondary life. State test scores showed improved literacy outcomes. Communities developed in reading classes for struggling and advanced students, but not for students working at grade level, nor in most advisories, especially those for upperclassmen.
Measuring the Fidelity of Literacy Programs: No Shortcuts
Presenter(s):
Nancy Carrillo,  Albuquerque Public Schools,  carrillo_n@aps.edu
Abstract: A large urban school district requested an evaluation of the many reading programs implemented among its elementary schools. Previous research on school-level data had suggested that the type of reading program did not influence assessment outcomes, but these results were not well accepted, and a more thorough evaluation was requested. Stakeholders did not see evaluators as unbiased, and some believed that program fidelity was a key factor that had not been considered. I convened a committee of stakeholders with disparate opinions to assist in designing the main data collection tool. This evaluation changed the unit of analysis from school to student and included measures of program fidelity. Results are quite similar to those found in the past: there are few differences between programs when student- and school-level measures are included, nor were fidelity measures found to affect outcomes.