Evaluation 2008



Session Title: Building Evaluation Capacity in Schools: Strategies for Fidelity
Multipaper Session 243 to be held in Room 107 in the Convention Center on Thursday, Nov 6, 9:15 AM to 10:45 AM
Sponsored by the Evaluation Use TIG, the Pre-K - 12 Educational Evaluation TIG, and the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Jacqueline Stillisano,  Texas A&M University,  jstillisano@tamu.edu
Increasing the Utilization of Evaluation Findings Through the Disaggregation of Survey Data
Presenter(s):
Nicole Gerardi,  University of California Los Angeles,  gerardi_nicole@yahoo.com
Abstract: Evaluation reports are often based on aggregated data, with analysis restricted to descriptive statistics. Disaggregating the data can reveal new relationships that vary substantially from those based on aggregated data. It is important that evaluation findings and suggestions be specific, especially in formative program evaluation. The author was part of a large evaluation, conducted over three years (2003-2006) at a high school in the Los Angeles Unified School District, aimed at understanding the college-going culture of the school. This paper explores how disaggregating evaluation survey data provides a much richer interpretation of particular evaluation findings. The increased use of evaluation findings by school personnel, once provided with higher-quality suggestions, is addressed. The paper includes a brief discussion of the college-going culture literature, the processes of re-analyzing data and re-presenting findings, and what various stakeholders were able to do with the new evaluation findings.
Measuring Implementation Fidelity: Implications for Program Implementation and Evaluation
Presenter(s):
Rochelle Fritz,  Miami University,  rokuserm@muohio.edu
Paul Flaspohler,  Miami University,  flaspopd@muohio.edu
Abstract: This paper focuses on the development of, and implications for, a measure of implementation fidelity for an evidence-based violence prevention program. It discusses a process for developing an implementation fidelity measure that incorporates items assessing both program-specific elements and general principles of effective prevention. In a review of the implementation literature, Fixsen and colleagues (2005) noted that feedback loops are important in keeping evidence-based programs on track. They also found that measures of fidelity built into the functions of the program site may be more useful than measures conducted by outside researchers. The measure discussed in this paper is meant to be a self-assessment that can be used for program monitoring and program planning. Additional uses of the measure, such as linking program fidelity to program outcomes, are considered.
Consequences of Building Evaluation Capacity in Schools: A Case Study of Iteration in Team Development
Presenter(s):
Edward McLain,  University of Alaska Anchorage,  ed@uaa.alaska.edu
Susan Tucker,  E & D Associates LLC,  sutucker@sutucker.cnc.net
Letitia Fickel,  University of Alaska Anchorage,  aflcf@uaa.alaska.edu
Abstract: Building the capacity of school-based "data teams" to use improvement-oriented evaluation methodologies across diverse contexts has not been studied systematically. USDE's Title II Teacher Quality Enhancement (TQE) program is charged with enhancing teacher quality in the context of high-need schools. This paper shares the results of a professional development project with school-based data teams that have been operating since spring 2005 in collaboration with Alaska's TQE project. The goal of these teams is to inform staff and school decisions regarding diverse K-12 student performance, instructional strategies, resource management, systemic support mechanisms, and professional development. The paper discusses the development of a tool for describing the iterative nature of team functioning that emerged from our grounded theory study with the teams. The tool's utility is illustrated through a study of selected teams, providing context for the lessons learned in developing and initially applying the tool.
The After School Program Report Card: A Tool for Sharing and Utilizing Evaluation Findings
Presenter(s):
Joelle Greene,  National Community Renaissance,  jgreene@nationalcore.org
Susan Neufeld,  National Community Renaissance,  sneufeld@nationalcore.org
Yoon Choi,  National Community Renaissance,  ychoi@nationalcore.org
Abstract: The After School Program Report Card is a user-friendly tool that promotes the timely utilization of data for program decision-making. The report card simplifies data so that all staff, from administrators to field staff, can identify and address program strengths and areas for improvement. We will present the tool, provide an overview of its development, and discuss its impact on program improvement and the evaluation process. In addition, we will share lessons learned and solicit input from session attendees to strengthen the tool. National Community Renaissance's Department of Community Assessment and Program Evaluation serves as the internal evaluator for a collaborative of seven community-based after-school providers operating 20 individual after-school programs that serve over 600 children and youth on-site at affordable housing developments in Southern California.

