Evaluation 2008



Session Title: On the Outside Looking In: Lessons From the Field for External Evaluators
Multipaper Session 815 to be held in Mineral Hall Section F on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Pre-K–12 Educational Evaluation TIG
Chair(s):
Faith Connolly,  Naviance,  faith.connolly@naviance.com
Improving the Usefulness of External Evaluations in Schools: An Analysis of Multiple Perspectives in Reading First Ohio
Presenter(s):
James Salzman,  Cleveland State University,  j.salzman@csuohio.edu
Sharon Brown,  Cleveland State University,  s.a.brown54@csuohio.edu
Tania Jarosewich,  Censeo Group LLC,  tania@censeogroup.com
Abstract: In this paper, we consider the plight of district personnel and external evaluators as they attempt to negotiate the external evaluation process to improve the usefulness of the results. Preliminary analyses of district evaluations suggested variability in the quality of evaluations that districts received. This study examined the evaluation process, including relationships between evaluators and district personnel, quality of evaluation data, usefulness of evaluation reports, and value of information provided by external evaluators, in thirteen districts that participated in the Reading First program in Ohio. The evaluators used a mixed-methods approach to collect data from three sources using two methods: interviews with district leaders, interviews with evaluators, and an analysis of external evaluation reports.
Evaluating Federal Smaller Learning Community Program Grants: Lessons Learned in Urban Districts Across Six States
Presenter(s):
Miriam Pacheco Plaza,  University of Miami,  m.pacheco1@miami.edu
Adam Hall,  University of North Carolina Greensboro,  ahall@serve.org
Ann Bessell,  University of Miami,  agbessell@miami.edu
Abstract: This presentation will focus on lessons learned by evaluators in six states while conducting their external evaluations for large urban districts awarded Smaller Learning Communities (SLC) Program Grants from the U.S. Department of Education. Frustrations and challenges encountered by the majority of participating evaluators, along with proposed solutions for dealing with those challenges, are included. Data were gathered through focus groups, individual interviews, and questionnaires.
Processes and Strategies for Conducting School-Based Federal Evaluations
Presenter(s):
Janet Lee,  University of California Los Angeles,  janet.lee@ucla.edu
Anne Vo,  University of California Los Angeles,  annevo@ucla.edu
Minerva Avila,  University of California Los Angeles,  avila@gseis.ucla.edu
Abstract: Evaluations conducted in schools and school districts offer a unique setting in which to study evaluation practice. When working within a public school setting, evaluators face obstacles such as school and district bureaucracy, the constraints of other educational mandates, and personnel issues. In this presentation, we explore the challenges of conducting evaluations in schools and school districts and offer suggestions for addressing them. Particular emphasis is given to the role of the evaluator. The strategies discussed are drawn from an ongoing evaluation of a federal grant awarded for the implementation of Smaller Learning Communities in a large, urban school district.
Enhancing Data Collection and Use: Specific and Practical Lessons Learned Working With K–12 Schools
Presenter(s):
Susan Saka,  University of Hawaii,  ssaka@hawaii.edu
Abstract: Schools are constantly receiving requests to participate in research studies, including evaluations. With NCLB and other pressures, they are becoming less willing to do so. This limits the ability to gather accurate data that can be used to inform practice and, ultimately, policy. Lessons learned from over 25 years of experience working with K–12 schools will be discussed, including how to a) get buy-in from the people who have the power to grant permission to conduct the study/evaluation, b) time specific aspects of a study, including obtaining IRB approval, c) increase cooperation of teachers and students, d) budget for incentives and contingencies, e) anticipate events that may jeopardize data collection, f) reduce the burden placed on school-level personnel, g) help the researcher/evaluator manage the tracking of data collection and other aspects of the study/evaluation, and h) turn results into actions that inform practice and, ultimately, policy.

