|
Initial Results From “Beyond Evaluation Use”: A Study of
Involvement and Influence in Large, Multi-site National Science Foundation (NSF) Evaluations
|
| Presenter(s):
|
| Jean King,
University of Minnesota,
kingx004@umn.edu
|
| Lija Greenseid,
University of Minnesota,
gree0573@umn.edu
|
| Kelli Johnson,
University of Minnesota,
johns706@umn.edu
|
| Frances Lawrenz,
University of Minnesota,
lawrenz@umn.edu
|
| Stacie Toal,
University of Minnesota,
toal0002@umn.edu
|
| Boris Volkov,
University of Minnesota,
volk0057@umn.edu
|
| Abstract:
This paper presents preliminary findings of a three-year study of the use and influence of four NSF-funded program evaluations. It examines the relationship between stakeholder involvement and the long-term impact of the evaluations on project staff, the science, technology, engineering, and mathematics (STEM) community, and the evaluation community. The project answers three overarching research questions:
1. What patterns of involvement exist in large, multi-site evaluations?
2. To what extent do different levels of involvement in program evaluations result in different patterns of evaluation use and influence?
3. What evaluation practices are most directly related to enhancing the influence of evaluations?
Initial results suggest that people are involved in these large-scale program evaluations in a number of ways and that, not surprisingly, involvement can affect use. Given the many factors shaping these complex processes, however, our data suggest that the mechanisms promoting evaluation use and influence are far more difficult to pinpoint.
|
|
Case Studies of Evaluation Use and Influence in a School District
|
| Presenter(s):
|
| John Ehlert,
University of Minnesota,
jehlert@comcast.net
|
| Jean King,
University of Minnesota,
kingx004@umn.edu
|
| Abstract:
The study's purpose was to determine the ways people used the results of specific evaluations and how these evaluations influenced district practice over time. It focused on three evaluations, selected because they were completed between 1999 and 2004, were participatory in nature, and had enough participants remaining in the district to interview: a study of the implementation of state graduation standards, a study of the Special Education Department, and a study of middle school programming. Two primary methods were used: interviews with participants in the three studies and document analysis of related meeting notes, reports, information from the district website, and other materials. The results document how subsequent decisions incorporated evaluation content and how particular processes created structures for use. They also demonstrate the extent to which external forces dramatically affected the use and influence of these evaluations, with implications for the concepts of evaluation use and influence more generally.
|
|
Process Use and Organizational Learning: A Different Perspective: The Case of the World Bank
|
| Presenter(s):
|
| Silvia Paruzzolo,
World Bank,
sparuzzolo@worldbank.org
|
| Giovanni Fattore,
Bocconi University,
giovanni.fattore@unibocconi.it
|
| Abstract:
Although interest in the process use of evaluation and in organizational learning has grown substantially in recent years, studies that investigate non-evaluators' perspectives on this issue are almost absent. Several authors maintain that evaluation appears to be most useful, especially as an organizational learning 'tool', when it is conducted using participatory approaches. The present study addresses the following question: Would program practitioners involved in an evaluation of their program as primary stakeholders agree with this view, and why? The guiding idea of the paper is that if evaluations are meant to serve as an organizational learning system, they need to be viewed as such by their primary users, i.e., those who initiate, and benefit from, the learning process. Using a mixed methods approach, the authors explore this issue in the context of the World Bank, where interest in program evaluation is gaining momentum.
|
|
Building Learning Communities With Evaluation Data Teams: A Collective Case Study of Six Alaskan School Districts
|
| Presenter(s):
|
| Edward McLain,
University of Alaska, Anchorage,
ed@uaa.alaska.edu
|
| Susan Tucker,
Evaluation and Development Association,
sutucker@sutucker.cnc.net
|
| Diane Hirshberg,
University of Alaska, Anchorage,
hirshberg@uaa.alaska.edu
|
| Alexandra Hill,
University of Alaska,
anarh1@uaa.alaska.edu
|
| Abstract:
Building the capacity of school-based "data teams" to use various improvement-oriented evaluation methodologies across diverse contexts has not been studied systematically. The US Department of Education's Title II Teacher Quality Enhancement (TQE) initiative is charged with enhancing teacher quality in high-need schools, which face a worsening crisis in attracting and retaining quality teachers. We discuss the development and growth of data teams in districts serving 60% of Alaska's students. Working with faculty from the University of Alaska Anchorage (UAA), data teams composed of teachers and principals form a professional learning community for restructuring and reculturing (Fullan, 2000; Sparks, 2005). Operating since spring 2005, these teams facilitate data-enhanced question framing, planning, and decision making regarding student performance, instructional strategies, teacher retention, resource management, and systemic support, grounded in the geography of need and cultural responsiveness. We address lessons learned regarding the challenges of partnering, plateau effects, and honoring diversity, along with strategies for successful data team development and institutionalization.
|