Session Title: Improving the Quality of Evaluation Practice by Attending to Context

Panel Session 102 to be held in Lone Star A on Wednesday, Nov 10, 4:30 PM to 6:00 PM

Sponsored by the Presidential Strand

Chair(s):
George Julnes, University of Baltimore, gjulnes@ubalt.edu

Discussant(s):
Eleanor Chelimsky, Independent Consultant, eleanor.chelimsky@gmail.com

Abstract:
This panel, composed of the three individuals involved in developing the 2009 AEA Conference Presidential Strand on context, will draw on publications developed from the strand to explore the ways in which attending to context can improve the quality of evaluation practice. Context influences how we as evaluators approach and design our studies, how we carry them out, and how we report our findings. Using the five aspects of context covered by Rog in her Presidential address (the nature of the problem being addressed, the context of the intervention being examined, the setting and broader environment in which the intervention is being studied, the parameters of the evaluation itself, and the broader decision-making context), the panel will explore the ways in which attending to these areas, and to the dimensions within them (physical, organizational, social, cultural, traditional, and historical), can enhance the quality of evaluation practice.

Striving for Truth, Beauty, and Justice in Evaluation Practice: A Methodological Analysis of Contextually Sensitive Practice
Debra Rog, Westat, debrarog@westat.com

This first presentation will reintroduce the contextual framework offered in the 2009 Presidential address. Using the framework, it will illustrate how attending to its five areas and the dimensions within each area can inform the development and implementation of study designs and methods that embrace the standards of quality identified by House (1980). In particular, drawing on the contributions to the 2009 Presidential strand as well as other examples, the presentation will highlight context-sensitive strategies that foster stakeholder participation, provide for rigor in outcome assessments, and improve the explanatory power of studies. It will also analyze the extent to which these strategies appeared to enhance study credibility, were perceived as fair by stakeholders, and resulted in findings that were considered accurate and valid. Challenges in conducting contextually sensitive evaluation practice while balancing quality standards will be discussed, and recommendations will be offered.

Recognizing and Reconciling Differences in Stakeholders’ Contexts as a Prelude to High Quality Evaluation: The Cases of Cultural and Community Contexts
Ross Conner, University of California, Irvine, rfconner@uci.edu

This presentation will focus on the importance of identifying differences in the contexts in which stakeholders are anchored, and then reconciling those differences among the stakeholders, in order to plan and implement high-quality evaluation. Using examples from cultural and community contexts, the presenter will illustrate the types of differences among three important stakeholder subgroups: those participating in a project, those evaluating it, and those funding the project and its evaluation. Drawing on their own context, people in each subgroup hold viewpoints and assumptions about what constitutes a ‘successful’ project intervention and about what evidence can and should be produced to prove ‘success,’ or, framed in the terms of one of these subgroups (the evaluators), about which designs, methods, and measures are best. Failure to understand and reconcile these differing viewpoints and assumptions at the outset of an evaluation is one factor that hinders high-quality evaluation.

Comparative Evaluation Practice and Politics: The Role of Political Context in Influencing Quality
Jody Fitzpatrick, University of Colorado Denver, jody.fitzpatrick@ucdenver.edu

In political science and public administration, there is a strong tradition of comparative research. Specifically, researchers in these fields study how countries make different political or administrative choices in pursuing remedies to problems that government may address. In the United States, where evaluation has been grounded primarily in psychology and education, evaluators have not drawn on these comparative traditions to study, in a systematic way, the differences across countries that lead to differences in evaluation policies, practice, and, ultimately, evaluation quality. AEA does have a strong international presence, and in our publications we occasionally learn of practices in other countries. This presentation, however, will draw on the comparative tradition of public administration and political science research to propose an agenda for evaluation research, one aimed at identifying the political characteristics and contextual elements that may lead to different types, and different qualities, of evaluation.