Session Title: Summer School Ain't So Bad, But Evaluating It Can Be: Lessons Learned From Outcome Evaluations of Summer Programs

Panel Session 830, to be held in Federal Hill Suite on Saturday, November 10, 1:50 PM to 3:20 PM

Sponsored by the Pre-K-12 Educational Evaluation TIG

Chair: Elizabeth Cooper-Martin, Montgomery County Public Schools, elizabeth_cooper-martin@mcpsmd.org

Discussant: Cindy Tananis, University of Pittsburgh, tananis@pitt.edu
Abstract:
School districts commit significant effort and resources to summer programs. In this panel, the presenters will share their experiences evaluating a variety of such programs, including academic and arts programs for elementary students and remedial and enrichment courses for middle school students. Specifically, each panelist will reflect on a particular type of outcome that is useful for evaluating a summer program, presenting its advantages and challenges, as well as lessons learned from using that outcome in their evaluations. Where available, panelists will present evaluation designs, data collection instruments, analytical methods, and results. The panelists will discuss the potential and limitations of the following approaches: course data and standardized test scores from the following academic year, stakeholder survey data, cumulative effects, and scores from pre-session and post-session tests. The panel's goal is to share lessons learned in the field as an invitation to discussion about outcome evaluations of summer programs.

The Use of Next Year's Course Enrollment, Test Scores, and Course Grades in an Evaluation of Summer Intervention and Enrichment Courses for Middle School Students

Elizabeth Cooper-Martin, Montgomery County Public Schools, elizabeth_cooper-martin@mcpsmd.org
Rachel Hickson, Montgomery County Public Schools, rhickson731@yahoo.com

Middle schools in Montgomery County Public Schools offered two types of summer courses. Focus classes in mathematics were designed to increase the number of students participating in advanced mathematics classes. Intervention courses, in both mathematics and English, were intended to help students achieve grade-level requirements in those subjects. The proposed outcomes of interest were end-of-course grades, standardized test results (both scores and passing rates), and, for the focus classes only, enrollment in above-grade-level courses. Although these are clearly important outcomes for the school system, the measures raised several issues: the lag time between the course and the scores or grades, the relevance of the outcomes to the summer program content, and the heterogeneity of courses taken by middle school students. Other lessons learned relate to identifying an appropriate comparison group of students and using thresholds to measure student improvement.
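
As a concrete illustration of the threshold approach mentioned above, the sketch below compares the share of summer-course participants and comparison-group students who meet a grade-level cutoff on the following year's test. The column names and the cutoff value are hypothetical, introduced only for illustration; they are not the actual MCPS measures.

```python
import pandas as pd

# Hypothetical records: one row per student, with group membership
# ("summer" vs. "comparison") and the next school year's test score.
df = pd.DataFrame({
    "group": ["summer", "summer", "summer", "comparison", "comparison", "comparison"],
    "next_year_score": [412, 388, 455, 395, 370, 401],
})

GRADE_LEVEL_CUTOFF = 400  # illustrative threshold, not an actual MCPS cutoff

# Flag students who met or exceeded the grade-level threshold.
df["met_threshold"] = df["next_year_score"] >= GRADE_LEVEL_CUTOFF

# Share of students at or above the threshold, by group.
rates = df.groupby("group")["met_threshold"].mean()
print(rates)
print(f"Difference (summer - comparison): {rates['summer'] - rates['comparison']:.2f}")
```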

The Use of Multiple Stakeholder Surveys in the Evaluation of Summer Programs for Elementary Students

Nyambura Maina, Montgomery County Public Schools, susan_n_maina@mcpsmd.org
Julie Wade, Montgomery County Public Schools, julie_wade@mcpsmd.org

In an effort to gain a better understanding of a program and its effects, we may examine it from different "angles." Our evaluation of two summer programs in Montgomery County Public Schools, Extended Learning Opportunities Summer Adventures in Learning (ELO SAIL) and 21st Century Community Learning Centers (21st CCLC), uses survey data from multiple stakeholders. ELO SAIL is a four-week program for students in Grades K-5 at Title I schools; its goal is to alleviate students' summer learning loss and to help schools maintain Adequate Yearly Progress. The 21st CCLC program supports the ELO SAIL academic program by providing cultural arts and recreation activities. Ongoing evaluations of the programs employ a range of outcome measures, including surveys of administrators, teachers, artists, media specialists, recreation providers, parents, and students. This discussion will address effective administration of multiple stakeholder surveys, response rates, reliability, corroboration of findings with other data sources, and consequential validity.
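
Since survey reliability is one of the topics above, a common first check for a multi-item survey scale is Cronbach's alpha. The sketch below is a generic implementation on a hypothetical response matrix; it is not drawn from the actual MCPS instruments.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents x 4 Likert-type items.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the appropriate bar depends on how the scale scores are used.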

Evaluation of Cumulative Effects of a Summer Elementary Education Program

Scot McNary, Montgomery County Public Schools, scot_w_mcnary@mcpsmd.org

One type of summer educational program that school districts implement is designed to prevent summer learning loss, and some such programs allow students to return each summer. The benefit of a single summer's attendance should be detectable as reduced summer learning loss; the cumulative effect of attending across multiple summers, however, is more difficult to evaluate. This presentation discusses design decisions made during a recent evaluation of a summer elementary education program. Challenges include defining and measuring effects, particularly establishing good candidates for comparison to attendees, defining cumulative attendance, and selecting appropriate outcomes. Lessons learned pertain to improving future evaluation efforts, as follows: 1) rely on recent methodological advances in matching for observational studies, 2) ensure outcome measures have sufficient validity for their intended use, and 3) construct a priori definitions of cumulative effect.
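
The abstract does not name a specific matching method, but one widely used option for constructing a comparison group in observational studies is propensity-score matching. The sketch below, with simulated covariates and scikit-learn, is only an illustrative reading of that idea, not the evaluation's actual procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Simulated, hypothetical data: two covariates (e.g., a prior test score and
# a demographic indicator) and a flag for summer-program attendance that is
# related to the covariates, as it would be in a non-randomized setting.
n = 200
X = rng.normal(size=(n, 2))
attended = rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))

# Step 1: estimate propensity scores P(attend | covariates).
propensity = LogisticRegression().fit(X, attended).predict_proba(X)[:, 1]

# Step 2: match each attendee to the nearest non-attendee on propensity score.
treated_ps = propensity[attended].reshape(-1, 1)
control_ps = propensity[~attended].reshape(-1, 1)
matcher = NearestNeighbors(n_neighbors=1).fit(control_ps)
_, match_idx = matcher.kneighbors(treated_ps)
matched_controls = np.flatnonzero(~attended)[match_idx.ravel()]

print(f"{attended.sum()} attendees matched to "
      f"{len(set(matched_controls))} unique comparison students")
```

Outcome differences between attendees and their matches would then estimate the program effect, under the usual assumption that the covariates capture selection into attendance.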

Evaluating Outcomes of a Summer Learning Program Using a Non-Randomized Comparison-Group Pretest-Posttest Quasi-Experimental Design

Helen Wang, Montgomery County Public Schools, helen_wang@mcpsmd.org

This summative evaluation employs a non-randomized comparison-group pretest-posttest quasi-experimental design to examine academic benefits of the Extended Learning Opportunities Summer Adventures in Learning (ELO SAIL) program in Montgomery County Public Schools in Maryland. The summer program provides four weeks of services to incoming Kindergarten through Grade 5 students from Title I schools, aimed at alleviating summer academic loss and promoting continued progress in learning. This discussion addresses the strengths of the selected evaluation design as a scientifically rigorous yet ethically practical approach to evaluating the summer program. Also discussed are challenges encountered in the evaluation and their solutions, including developing realistic evaluation questions, selecting relevant outcome measures, and managing difficulties in assessment administration.
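
A standard analysis for this design is ANCOVA: regress the posttest on a program-participation indicator while adjusting for the pretest. The sketch below uses statsmodels with simulated, hypothetical data; the actual ELO SAIL analysis may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated, hypothetical data: pretest scores, program participation, and a
# posttest that depends on the pretest plus a small built-in program effect.
n = 300
pretest = rng.normal(50, 10, n)
program = rng.integers(0, 2, n)  # 1 = participant, 0 = comparison student
posttest = 5 + 0.9 * pretest + 2.0 * program + rng.normal(0, 5, n)
df = pd.DataFrame({"pretest": pretest, "program": program, "posttest": posttest})

# ANCOVA: the coefficient on C(program)[T.1] estimates the adjusted program
# effect, valid only under the assumptions of this non-randomized design.
model = smf.ols("posttest ~ pretest + C(program)", data=df).fit()
print(model.params)
```

Because groups are not randomized, the pretest adjustment helps but cannot fully rule out selection bias, which is the central tension of this design.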