Session Title: Restoring Context to Evaluation Practice

Panel Session 264, to be held in Sebastian Section L4 on Thursday, Nov 12, 10:55 AM to 12:25 PM

Sponsored by the AEA Conference Committee

Chair(s):
Zoe Barley, Mid-continent Research for Education and Learning, zbarley@mcrel.org

Discussant(s):
Ernest House, University of Colorado at Boulder, ernie.house@colorado.edu

Abstract:
Evaluators have traditionally paid close attention to the many contexts relevant to conducting program and policy evaluations. The need to interpret an evaluation's results in light of contextual barriers and supports has long been well understood. Over the last several years, however, in part as a result of increased attention to randomized designs, context has been de-emphasized under the assumption that randomization renders contexts equal across groups. Without information about the policies, persons, agencies, and physical settings that surround the evaluand, findings are impoverished: they may not be responsive to stakeholders' needs, and they are unlikely to provide an appropriate or adequate basis for decisions or actions. Three panel members, grounding their remarks in experience, will discuss what this loss of context means for evaluators, for practitioners, and for the broader public. Audience discussion of loss and restoration will follow.

Culling the Wheat From the Chaff: No Context, No Meaning

Mary Piontek, Columbia University, mp2800@mail.cumc.columbia.edu

A professional evaluator and end-user of educational research and evaluation findings for almost 20 years, Dr. Piontek will discuss how users of such findings must consider the institutional and organizational contexts in which policies, programs, and services are implemented, and weigh those contexts against the "seemingly neutral" stance presented in much of the recent educational research literature. Without grounding in the nuanced nature of the evaluand, practitioners must "cull" through findings from research and evaluation reports that may ignore not only sources of data, crucial stakeholders, and idiosyncratic implementation, but also the fundamental assumptions of quality and merit embedded in program contexts. Without careful attention to context, a practitioner might put undue emphasis on questions of efficiency, or misinterpret impact, when her/his institution adopts (or adapts) programs designed by, and evaluated in, other institutions.

Measuring the Trees but Missing the Forest: When Evaluators Fail to Include Context

Andrea Beesley, Mid-continent Research for Education and Learning, abeesley@mcrel.org

An evaluator with experience in randomized controlled trials, Dr. Beesley will discuss the risks evaluators run in overlooking context. When evaluators focus on a narrow set of measurable outcomes, they risk missing important program outcomes, often the very outcomes of most interest to current and potential program participants. Ignoring context may lead evaluators to ask the wrong questions or to exclude crucial stakeholders. By reducing a program to what is most easily quantified, and by failing to observe or record contextual features, evaluators may conclude that a program is less successful, or more successful, than it really is. They may then make claims about a program's worth that prove incorrect when the program is enacted in a different context, and they may fail to describe the original context carefully enough for others to judge whether their own contexts are comparable to the one studied.

Will I Recognize the Forest When the Trees Are Cut Down? Contextualizing for Community

Sheila Arens, Mid-continent Research for Education and Learning, sarens@mcrel.org

At the heart of democracy lies an informed and involved citizenry, and evaluation is one means of informing the public and facilitating its decision-making. How evaluative information is collected and reported, however, has implications for democratic engagement. Some may argue that the push toward increased rigor provides a common language and a better understanding of causal mechanisms (ostensibly a consequence of design). Yet it is not hard to imagine inquiry stripped of certain kinds of evidence and of the "why" and "under what circumstances" questions. Evaluative inquiry practiced as if it were context-free isolates stakeholders: their interest and involvement wane as methodologically sophisticated inquiries dominate and findings lack the data needed to extrapolate to other programs or policies. Evaluators ought to remain cognizant of requirements imposed by legislation and funding agencies, but they also bear a responsibility to ensure that the information they provide contains sufficient contextual detail for decision-making.