|
Examining Educational Evaluation Policy Alternatives
|
| Presenter(s):
|
| Katherine Ryan,
University of Illinois Urbana-Champaign,
k-ryan6@uiuc.edu
|
| Abstract:
Today, educational evaluation is playing a key role in the shift to a knowledge-based society (Ryan, 2005). As nations vie for highly competitive positions within the global marketplace (Anderson, 2005), concerns about quality and the resources directed to education are increasing demands for information about school performance (Lundgren, 2003). This global preoccupation has led to a heightened international and national emphasis on educational accountability. At the same time, notions of educational accountability and the “machinery” for implementing accountability requirements vary across nations.
In this paper, I critically examine two distinct educational evaluation policies involving accountability: a) NCLB high-stakes testing (performance measurement) and b) school self-evaluation and inspection (e.g., England, the Netherlands, and the Pacific Rim) (Nevo, 2002; Ryan, 2005). My examination includes a brief discussion of the history of their respective roles in evaluation, their foundations, and their theories of action for improving student learning. I also analyze the evidence concerning their effects (e.g., are students learning more?) and potential consequences (e.g., fairness, narrowing of the curriculum). I close by considering whether "importing" an international accountability policy (school self-evaluation and inspection) might help with both school improvement and improved student learning in the United States.
|
|
Implementation Quality as an Essential Component in Efficacy Evaluations
|
| Presenter(s):
|
| Tiffany Berry,
Claremont Graduate University,
tiffany.berry@cgu.edu
|
| Rebecca Eddy,
Claremont Graduate University,
rebecca.eddy@cgu.edu
|
| Abstract:
The relative emphasis on outcomes rather than implementation data has been articulated as one of the potential consequences of No Child Left Behind for evaluation practice (Berry & Eddy, 2008). The implications for the future of educational research and evaluation are profound, especially if tracking and assessing implementation continues to be considered merely value-added for efficacy studies, particularly in the realm of educational evaluation. In fact, collecting implementation data in evaluations is as essential as a manipulation check is in experiments. The purpose of this paper is to discuss these implications, examine the benefits of collecting implementation data, and describe methods that increase the sophistication of implementation measurement through quality ratings rather than frequency counts.
|
|
Fidelity of Implementation
|
| Presenter(s):
|
| Evie Chenhall,
Colorado State University,
evie@cahs.colostate.edu
|
| Can Xing,
Colorado State University,
can.xing@colostate.edu
|
| Abstract:
The new national standards for foreign language instruction have been introduced to K-12 teachers in a school district in the southwestern United States. Instructional practices based on the standards are designed to help students reach proficiency in a foreign language. As part of a three-year U.S. Department of Education grant that began in 2006, the purpose of the study was to measure the depth of implementation of the new national standards in K-12 instruction. The evaluation component of the study was based on the Concerns-Based Adoption Model (CBAM) process referred to as the Levels of Use (LoU) analysis of fidelity of implementation. Interviews with foreign language teachers were conducted to evaluate teachers’ proficiency in using the new national standards. This presentation will discuss the implications of the study and examine the fidelity of implementation of the standards. Quantitative and qualitative findings will be presented.
|
|
Effect of Professional Standards on Evaluation Practice
|
| Presenter(s):
|
| Teresa Brumfield,
University of North Carolina at Greensboro,
tebrumfi@uncg.edu
|
| Deborah Bartz,
University of North Carolina at Greensboro,
djbartz@uncg.edu
|
| Abstract:
To examine how evaluation policy influences evaluation practice, this presentation proposes to address how evaluations of educational projects/programs may be affected by existing professional standards that are separate and distinct from the Guiding Principles for Evaluators. The educational project of interest—Preparing Outstanding Science Teachers (POST)—is a collaboration of a high-need comprehensive LEA, a public southeastern university’s College of Arts and Sciences, and its School of Education. The purpose of this collaboration is to develop and provide middle school science teachers with standards-based professional development opportunities in both content and pedagogy, with a focus on increasing student achievement.
By examining how this project's evaluation has been affected by professional development standards, national and state science standards, and test development standards, along with project evaluation guidelines, this presentation will emphasize why evaluators need more than a basic familiarity with the professional standards that may affect their evaluations.
|