|
Negotiating Program Evaluation in Collaborative Environments: Use of Best Practices & American Evaluation Association's Guiding Principles
|
| Presenter(s):
|
| Tosha Cantrell-Bruce, University of Illinois, Springfield, tcantrel@uis.edu
|
| Abstract:
Evaluators increasingly find themselves working in collaborative arrangements when evaluating programs. Further complicating this changing environment are the divergent priorities of stakeholders.
That was the situation for this new evaluator, who was asked to evaluate a locally funded technology program. The program aimed to place 100 computers into the homes of low-income children and assess whether the computers influenced recipients' academic achievement. A local nonprofit, a funder, three schools, and the evaluator were involved in the program's implementation and evaluation.
This case confirmed many of the 'real world' evaluation techniques suggested by other evaluators: negotiating realistic outcomes, explaining methodological limitations, identifying stakeholder priorities, and negotiating ownership of the data.
The evaluator also relied heavily on the AEA Guiding Principles to navigate this evaluation, particularly the principles of systematic inquiry, integrity and honesty, and respect for people.
|
|
When Values Collide: A New Evaluator's Experience Managing Program Resistance to Evaluation
|
| Presenter(s):
|
| Krista Schumacher, Oklahoma State University, krista.schumacher@okstate.edu
|
| Abstract:
This paper describes the experience of an external evaluator contracted to evaluate a multi-state, federally funded program. The project's goal is to create new technology programs at approximately 60 institutions of higher education across several states; its main activities include the creation of new courses and faculty professional development. While the goal of the evaluation initially seemed straightforward, it became clear that the project administration viewed evaluation unfavorably, which significantly compromised evaluation efforts. Despite a national review committee's requests for a strong evaluation, and the evaluator's efforts to respond to those requests, the project director attempted to steer the evaluation away from genuine assessment toward a simple counting of institutions, faculty, courses, and students. This experience, which placed the evaluator in the position of compromising her values and ethics, offers important lessons for external evaluators about clearly establishing the values and expectations of project directors before entering into a contract.
|
|
Valuing an Iterative Process in Evaluation
|
| Presenter(s):
|
| Joanna Doran, University of California, Berkeley, joannad@berkeley.edu
|
| Chris Lee, University of California, Berkeley, clee07@berkeley.edu
|
| Abstract:
Viewing an evaluation measure as a static entity may prevent an evaluator from capturing useful information that emerges from an iterative assessment process. This paper describes one example of incorporating an iterative process to inform revision and provide clarity in a multi-site evaluation of curriculum infusion. A social work education program was implemented across twenty professional social work schools, and an evaluation plan was created to assess program implementation and impact. An initial survey was developed to gauge the degree to which learning competencies were being infused across school curricula. Results of this initial survey proved difficult to analyze and challenging to synthesize for reports. Accordingly, this experience led to the development of a new tool for assessing the infusion of competencies. This paper describes the evolution of this evaluative tool and the iterative process taken to arrive at it.
|