A Case Study of Involvement and Influence: Multi-year Cross-site Core Evaluation of the National Science Foundation's Collaboratives for Excellence in Teacher Preparation Program

Presenter(s):

Kelli Johnson, University of Minnesota, johns706@umn.edu
Frances Lawrenz, University of Minnesota, lawrenz@umn.edu
Lija Greenseid, University of Minnesota, gree0573@umn.edu

Abstract:
This paper presents a case study of the program-level evaluation of the National Science Foundation's Collaboratives for Excellence in Teacher Preparation (CETP) Program. The CETP program was designed to improve mathematics and science teacher preparation by improving undergraduate science and mathematics courses as well as education courses. Drawing on extensive document review, key informant interviews, and a survey, this case study describes the involvement of various stakeholders in the overall program evaluation across 25 CETP project sites nationwide. It highlights issues raised by voluntary multi-year, cross-site participation among projects whose willingness to collaborate on instrument development, to adhere to standard protocols, and to share data with the core evaluation team varied widely. The case study informs an exploration of the relationships between evaluation activities, particularly stakeholder involvement, and evaluation influence, as well as the pathways that lead to greater evaluation use and influence.


Multiplicity in Action: Creating and Implementing a Multi-program, Multi-site Evaluation Plan for a Predominantly Minority/Urban School District

Presenter(s):

Mehmet Dali Öztürk, Arizona State University, ozturk@asu.edu

Abstract:
Multiplicity often means complexity when designing sound and reliable evaluation plans for educational partnerships. However, multi-program, multi-site evaluations that are carefully designed, planned, and conducted can be highly effective in understanding the effects of educational innovations, interventions, and reforms in diverse settings, thus contributing to systemic change efforts. This paper examines a multi-component evaluation plan that focuses on identifying which program factors, if any, are related to improved outcomes for students, schools, and families. Specifically, the paper shares experiences in creating and implementing a multi-program, multi-site evaluation plan for an ongoing university-school partnership initiative designed to achieve the common goal of improving educational outcomes for students in a culturally, linguistically, and economically diverse region of the United States.


Lessons Learned From Rating the Progress and Extent of Reform

Presenter(s):

Patricia K Freitag, COSMOS Corporation, patfreitag@comcast.net
Darnella Davis, COSMOS Corporation, ddavis@cosmoscorp.com

Abstract:
Innovative evaluation frameworks are needed to understand the progress and extent of complex education reforms and their relationship to research-based components of systemic reform. The lessons learned from measuring and rating reform components and progress are at the heart of this paper. Assertions regarding the relative progress and impact of funded projects will be made within the context of well-developed case studies of comprehensive school reform. Recommendations for adapting reform progress measures for broader use in evaluation are derived from pilot-test findings. Ratings reveal shifting priorities and constraints that may be correlated with system alignment, project maturity, and incremental changes in student performance. The paper discusses how clarifying the underlying theory of action is helpful for revising measures of progress and may improve rating and ranking reliability among multiple raters of reform.


Value-added Assessment: Teacher Training Designed to Improve Student Achievement

Presenter(s):

Laurie Ruberg, Wheeling Jesuit University, lruberg@cet.edu
Judy Martin, Wheeling Jesuit University, jmartin@cet.edu
Karen Chen, Wheeling Jesuit University, kchen@cet.edu

Abstract:
Recent studies question the belief that family and socio-economic background strongly influence student learning while teachers and schools have only a limited effect. Current research shows that students learn a great deal from, and are greatly influenced by, an effective teacher. This report examines the outcomes of a three-year, multi-site, multi-level professional development program for teachers situated at schools serving low socio-economic and ethnically diverse populations. Program interventions are designed to provide professional development strategies aimed ultimately at increasing student achievement in science.
The evaluation combines qualitative and quantitative research methods. In this third year of program implementation, the analysis builds upon prior evaluations addressing organizational and service utilization plans and focuses on program impact. The professional development guidelines used to design the interventions are applied to the data analysis to assess whether the program had the desired effect on teachers and students.


Using Threshold Analysis to Develop a Typology of Programs: Lessons Learned from the National Evaluation of Communities In Schools (CIS)

Presenter(s):

Allan Porowski, Caliber, an ICF International Company, aporowski@icfi.com
Stella Munya, Caliber, an ICF International Company, smunya@icfi.com
Felix Fernandez, Caliber, an ICF International Company, ffernandez@icfi.com
Susan Siegel, Communities In Schools, siegels@cisnet.org

Abstract:
One of the principal challenges of cross-site evaluations is making sense of the variability across programs. This presentation examines a particularly challenging cross-site evaluation of Communities In Schools, a program with more than 2,500 sites across the country, each delivering highly tailored services. To address this challenge and to bring a coherent framework to a highly diverse program, the authors developed a typology of programs using threshold analysis, a scoring method that brings together quantitative and qualitative data to address both performance measurement and adherence to the ideal program model. The threshold analysis resulted in a typology that captured the essence of each site's program without sacrificing the flexibility of the program model. We describe our methodology as well as the implications for interpreting the results of the typology analysis.