Session Title: Context and Its Influences on Methodological Choices in Evaluation

Panel Session 217 to be held in Sebastian Section K on Thursday, Nov 12, 9:15 AM to 10:45 AM

Sponsored by the Presidential Strand

Chair(s):
Jody Fitzpatrick, University of Colorado Denver, jody.fitzpatrick@ucdenver.edu

Abstract:
Of course context influences methodological choices in evaluation. The nature of the program, its theories, its activities, its outcomes, and the evaluation questions to be addressed determine the designs we develop, the sampling strategies we use, the constructs we measure, and the methods we use to collect information. But context also influences evaluation practice in other, more subtle ways. Use is a standard for good evaluation, and for an evaluation to be used, evaluators must attend to elements of context when making, and negotiating, choices about methodology: the values, beliefs, and experiences that different stakeholder groups hold about evaluation, about data, about participation in the evaluation, about organizational learning, and about the nature of evidence. These panelists will discuss how evaluators learn about context, attempt to work with it, and make methodological choices that lead to evaluations that are both valid and influential.

Putting Context in Context; or, Even the Contingencies Are Contingent

Melvin Mark, Pennsylvania State University, m5m@psu.edu

Context is a critically important construct. But it can also be a slippery term, partly because it applies at different points in evaluative processes. This presentation describes several key points at which evaluation and context intersect. For one, the context in which an evaluation is to be conducted, and the context for possible evaluation influence, should affect evaluation design. For another, aspects of the context in which a program operates may moderate program processes and outcomes, which can in turn affect evaluation design. The focus here is on how context affects evaluation design. A brief, selective review of evaluation theory identifies some suggestions about how context should affect evaluation design, and a review of several past evaluations illustrates how the program and utilization context affected design choices in practice. Finally, tentative suggestions are offered about how evaluators might better deal with context in the future.

Context and Evaluation: Another Perspective on the Complex Relationship

Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov

In her letter of invitation to this year's conference, President Deb Rog articulates a frame for thinking about "Context and Evaluation" and poses some important questions for the field. This framing places context as something external to the evaluation process: a setting, a situation, a place, a history, or, more broadly, a wider context within which evaluations take place. In the Dictionary of Qualitative Inquiry, Schwandt considers a definition that instead sees context as being "produced in the social practice of asking questions" (2001, p. 37). What contexts are created through evaluation? How can evaluators acknowledge and be thoughtful about the contexts they produce through the social practice of evaluation? Before your very eyes, Leslie Goodyear will wrestle with these challenging questions and share examples from her own experiences. Of particular interest are the ways in which context can influence the reporting of evaluation findings.

Participatory Theory-driven Evaluations: Modeling Context, Complexity, and Boundary Conditions

Stewart Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu

Stewart Donaldson will briefly outline the process of conducting participatory theory-driven evaluations and provide a range of examples of this approach in action from contemporary evaluation practice. As these evaluations are presented, he will illustrate how context can influence theories of change, evaluation questions, evaluation designs, method choices, and, perhaps most importantly, evaluation findings and conclusions. Special emphasis will be placed on new strategies for modeling context, complexity, and boundary conditions in an effort to produce credible and actionable evidence. Emerging tools and technologies for improving our ability to understand context, complexity, and boundary conditions in evaluation practice will also be discussed.