|
Using Student Grades as an Outcome Measure in Program Evaluation: Issues of Validity
|
| Presenter(s):
|
| Kelci Price,
Chicago Public Schools,
kprice1@cps.k12.il.us
|
| Susan Ryan,
Chicago Public Schools,
sryan1@cps.k12.il.us
|
| Abstract:
As the demand for impact evaluation in education continues to grow, it is important for evaluators to have valid measures for assessing program effectiveness. Although student grades are often used to assess the impact of educational interventions, there exists considerable ambiguity about the validity of this measure. This research explores the validity of grades with reference to three main issues: 1) the relationship of grades to student knowledge, 2) the sensitivity of grades to changes in student knowledge, and 3) whether there are systematic factors (student or school) that affect the relationship between grades and student knowledge. Implications of the findings for evaluators’ use of grades as a measure of program impact are discussed.
|
|
Rigor Versus Inclusion: Lessons Learned
|
| Presenter(s):
|
| Sheila A Arens,
Mid-continent Research for Education and Learning,
sarens@mcrel.org
|
| Andrea Beesley,
Mid-continent Research for Education and Learning,
abeesley@mcrel.org
|
| Laurie Moore,
Mid-continent Research for Education and Learning,
lmoore@mcrel.org
|
| Sandra Foster,
Mid-continent Research for Education and Learning,
sfoster@mcrel.org
|
| Jenna VanBerschot,
Mid-continent Research for Education and Learning,
jvanberschot@mcrel.org
|
| Robyn J Alsop,
Mid-continent Research for Education and Learning,
raslop@mcrel.org
|
| Abstract:
The rigor of school counseling research and evaluation has been questioned, leading to the National Panel for Evidence-Based School Counseling’s recommendations for improvement. In this paper, we describe our efforts to design an evaluation that would meet these quality standards. We then detail the construction and use of data collection protocols for all participating high schools to examine implementation fidelity, as well as the development of administrator, counselor, and student data collection instruments. In all phases of this work we have sought to employ an approach that relies on the inclusion of stakeholders, in particular the program administrators and all site counselors. However, maintaining quality standards while taking an inclusive approach is challenging. We candidly present the obstacles encountered in balancing our evaluation design with an inclusive evaluation approach, as well as our rationale for remaining committed to this approach despite its challenges.
|
|
Evaluating a School-University Partnership Program: Evaluation for Program Development and Improvement in the Context of Outcome-based Accountability
|
| Presenter(s):
|
| Tysza Gandha,
University of Illinois Urbana-Champaign,
tgandha2@uiuc.edu
|
| Abstract:
School-university partnership (SUP) is viewed as a promising approach to educational reform, and SUP programs continue to proliferate despite documented challenges. The potential and problems of SUP programs suggest that efforts to evaluate them can contribute substantially to educational improvements. This paper describes the first-year evaluation of a new SUP program charged with providing workplace-embedded professional development for K-12 educators. In light of the program’s embryonic stage, the first-year evaluation focused on engaging program staff in developing program theory and design, and addressed pressures to gather outcome data to demonstrate success. A commitment to responsiveness, which resulted in an amorphous program; a weak evaluative culture in the organization; and external accountability demands for outcome data all contributed to challenges in implementing the evaluation as planned. This paper will offer an analysis of the educational partnership context and discuss factors to consider in evaluating SUP programs.
|
|
Data-Informed Self-Evaluation: A Strategy to Improve Teaching
|
| Presenter(s):
|
| Wenhui Yuan,
Western Michigan University,
whyuan99@gmail.com
|
| Yun Shen,
Western Michigan University,
catherine.y.shen@gmail.com
|
| Abstract:
NCLB legislation holds teachers accountable for students’ performance, which brings both challenges and opportunities for improving teaching. Confronting this reality, the authors propose helping teachers achieve instructional improvement through a ‘teacher as evaluator’ strategy. By answering questions such as what data teachers could use for self-evaluation and how to use them, the authors offer suggestions for dealing with the mountains of data produced by the accountability movement. Barriers to, and the external support needed for, teachers’ implementation of data-informed self-evaluation will also be explored.
|