|
The Homogenization of Evaluation Policy
|
| Presenter(s):
|
| Katherine Ryan,
University of Illinois Urbana-Champaign,
k-ryan6@uiuc.edu
|
| Abstract:
The 2008 AEA Call for Papers acknowledges evaluation policy as an important influence on evaluation methods and practice. In this paper, I propose examining evaluation policy within the current milieu, that is, the social and institutional setting in which evaluation occurs. To what extent are New Public Management (NPM) and other social changes leading to an implicit if not explicit homogenization of evaluation policy across a variety of domains such as health care, education, environment, and international development? To address this question, I critically examine how the definitions of NPM concepts such as accountability and performance measurement have become entangled with the definition of evaluation. [NPM is a regulatory style that makes individuals and organizations accountable through auditable performance standards (Power, 1997).] As part of the examination, I present case vignettes illustrating how evaluation in education (a within-domain example) is being influenced and entangled at the federal and local levels in the United States, Europe, and the Pacific Rim. These cases are then examined within a descriptive framework including evaluation focus (e.g., learning, accountability), key players (e.g., local), norms, key concepts, roles and responsibilities, and types of evaluations conducted. I close with a brief discussion of opportunities and challenges in influencing evaluation policy.
|
|
The American Evaluation Association Guiding Principles for Evaluation and the Government/Contractor Interface
|
| Presenter(s):
|
| Connie K Della-Piana,
National Science Foundation,
dellapiana@aol.com
|
| Gabriel Della-Piana,
Independent Consultant,
dellapiana@aol.com
|
| Abstract:
Analysis of the AEA Guiding Principles for Evaluation [hereafter, Principles] on "selecting key evaluation questions" reveals heavy demands for relational skills, technical reasoning, and normative (practical) reasoning. Implications are drawn for: the interface between government-as-client (funding/purchasing evaluations) and evaluator-as-contractor (responding to federal requests for evaluation proposals); the Principles; and research on evaluation. Analysis of the principles directly relevant to generating and selecting evaluation questions reveals a formidable assignment: a complex, dynamic, and time-intensive process. The paper updates earlier critiques of the government contracting process, with emphasis on constraints and facilitators. It addresses these combined constraints and identifies ways to meet the joint demands of accountability as government prescribes it and accountability as professional practice prescribes it, while anticipating changing government players and standards.
|
|
Using Integrative Evaluation Practices for Program Improvement
|
| Presenter(s):
|
| Celeste Sturdevant Reed,
Michigan State University,
csreed@msu.edu
|
| Beth Prince,
Michigan State University,
princeem@msu.edu
|
| Megan Platte,
Michigan State University,
plattmeg@msu.edu
|
| Laurie A Van Egeren,
Michigan State University,
vanegere@msu.edu
|
| Abstract:
This presentation illustrates the ways in which statewide evaluation practices and tools are being incorporated by a state agency for overall program improvement. Using a federally funded, state-administered out-of-school time program as the example, we present the overall evaluation system that has been designed and discuss the network of inter-related policies and actions promoted by the state and by the evaluators. These mutually beneficial transformations include changes in such aspects as program policy, the requirements for gaining state funds, evaluation requirements, implementation factors (such as staffing and hours of operation), and staff attitudes. Collaboration among all partners, including the state agency, the evaluators, and the grantee service providers, has substantially reduced duplication in the collection of program and evaluation data. Because we are proponents of continuous program improvement, we will also discuss the challenges that remain for this partnership and its method of operating.
|
|
Policy Versus Practice: Does Anyone Win?
|
| Presenter(s):
|
| Candace Lacey,
Nova Southeastern University,
lacey@nova.edu
|
| John Enger,
Nova Southeastern University,
jenger@nova.edu
|
| Abstract:
This session focuses on the issue of policy versus practice as it relates to the development of a comprehensive evaluation plan for a four-year, eight-million-dollar grant. Tough decisions related to policy versus practice raised serious questions of evaluation ethics and standards.
|