Evaluator Contextual Responsiveness: A Simulation Study

Presenter(s):
| Tarek Azzam,
University of California, Los Angeles,
tazzam@ucla.edu

Abstract:
This simulation study examined how political context and evaluator characteristics shape evaluation design decisions. Evaluators were given the opportunity to modify their evaluation designs in response to systematically varied stakeholder perspectives, making it possible to observe how they reshaped their designs to fit different political contexts. The results revealed that evaluators were more responsive to stakeholders who held greater logistical control over the evaluation (e.g., funding, data access): under these conditions, evaluators were willing to modify more design elements than they were for stakeholders with less logistical control. Findings also suggested that evaluators' methodological and utilization preferences strongly influenced their design decisions.


What's Hot and What's Not? Sifting Through Six Years and Three Journals' Worth of Evaluation Theory and Research

Presenter(s):
| Bernadette Campbell,
Carleton University,
bernadette_campbell@carleton.ca

Deborah Reid,
Carleton University,
debbie.reid@sympatico.ca

Abstract:
There has been a push for more empirical research on program evaluation, with the aim of grounding evaluation theory in a base of evidence. With such a broad charge, however, it is difficult to know precisely where to begin: evaluation theory covers a great deal of territory. In the present study, we use a sample of the evaluation literature as a starting point. We present the results of a systematic review and content analysis of more than 500 abstracts, representing six years (2000-2006) of published research in three well-known evaluation journals (American Journal of Evaluation, Canadian Journal of Program Evaluation, Evaluation). Among other dimensions, abstracts were coded according to Shadish, Cook, and Leviton's (1991) five dimensions of evaluation theory: social programming, valuing, knowledge use, knowledge construction, and evaluation practice. It is hoped that the results of this review will (a) paint a picture of current theoretical debates and discussions in the field, and (b) provide a starting point for establishing priorities for the empirical program of evaluation (Shadish et al., 1991).