
Session Title: Overcoming the Limitations of the Educational Context to Increase Rigor
Multipaper Session 839 to be held in PRESIDIO A on Saturday, Nov 13, 1:40 PM to 2:25 PM
Sponsored by the Independent Consulting TIG
Chair(s):
Kathleen Haynie,  Haynie Research and Evaluation, kchaynie@stanfordalumni.org
Using an Interrupted Time Series Design to Evaluate the Impact of a Professional Development Program for Teachers
Presenter(s):
Frederic Glantz, Kokopelli Associates LLC, fred@kokopelliassociates.com
Abstract: Evaluators typically use comparison groups of students in other schools or in other classrooms within the same school to evaluate the impact of interventions designed to improve student outcomes. Neither design is appropriate in the case of voluntary teacher professional development programs. First, it is almost impossible to find truly comparable schools. Second, in voluntary programs for teachers, self-selection into the intervention results in biased impact estimates. An interrupted time series design avoids these problems by using participating teachers as their own comparison group. This paper discusses an evaluation of a professional development program designed to improve teaching skills in mathematics. The primary outcome measure is individual students’ placement on annual statewide standards-based assessments (SBA). The evaluation compared the SBA scores of participating teachers’ students for several years prior to the teachers’ participation in the intervention to the SBA scores of the same teachers’ students for several years following their participation.
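The design described above is often estimated as a segmented regression: annual mean scores are regressed on time plus an indicator for the post-intervention years, so the pre-intervention trend serves as the counterfactual. The sketch below illustrates that idea with synthetic data; the function name and the scores are illustrative assumptions, not taken from the evaluation itself.

```python
# Minimal interrupted time series sketch: a teacher's students serve as
# their own comparison. We fit an OLS segmented regression of yearly mean
# SBA scores on time, with a level-shift indicator for post-intervention
# years. All data here are synthetic.
import numpy as np

def its_level_shift(scores, interruption):
    """Estimate intercept, secular trend, and post-intervention level shift.

    scores: sequence of yearly mean scores for one teacher's students
    interruption: index of the first post-intervention year
    Returns the OLS coefficients [intercept, trend, level_shift].
    """
    t = np.arange(len(scores), dtype=float)
    post = (t >= interruption).astype(float)  # 1 in post-intervention years
    X = np.column_stack([np.ones_like(t), t, post])
    beta, *_ = np.linalg.lstsq(X, np.asarray(scores, dtype=float), rcond=None)
    return beta

# Synthetic example: flat pre-intervention scores, then an 8-point jump.
beta = its_level_shift([50, 50, 50, 50, 58, 58, 58], interruption=4)
# beta[2] recovers the level shift of 8
```

A fuller analysis would also test for a change in slope and model serial correlation across years, but the level-shift term captures the core pre/post comparison the abstract describes.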
Collaborating With Clients to Develop Psychometric Parallel Teacher and Student Evaluation Measures
Presenter(s):
Kathryn Race, Race & Associates Ltd, race_associates@msn.com
Abstract: The benefits of using evaluation measures that have demonstrated reliability and validity are well documented in the evaluation and social science literature. Yet it can be quite challenging for evaluators working with small programs, especially given limited resources, to develop evaluation measures that are sensitive and relevant to local programs while at the same time supported by demonstrated psychometric standards. The purpose of this presentation is to describe how, as an external evaluator, we worked cooperatively with a client to create such measures, one focused on teacher attitudes toward teaching science and the other focused on student assessment of the teaching methods experienced in their science classrooms. Each measure was sensitive to the local program’s model, and the reliability and validity of each measure were investigated. We will also highlight how we negotiated issues such as the use of shared resources and contractual matters such as intellectual property, along with lessons learned.
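The reliability investigation the abstract mentions is commonly reported as an internal-consistency estimate such as Cronbach's alpha. The sketch below shows the standard computation on synthetic item responses; the data and function name are illustrative assumptions, since the actual measures are not reproduced here.

```python
# Hedged sketch: Cronbach's alpha, a common internal-consistency
# reliability estimate for attitude scales like those described above.
# Rows are respondents, columns are scale items; all data are synthetic.
import numpy as np

def cronbach_alpha(items):
    """Return Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Perfectly consistent items yield alpha = 1.0
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

Validity evidence (e.g., alignment with the program model, expert review) requires separate, non-statistical work, which is why the collaborative development process the presenters describe matters.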
