Evaluation 2011



Session Title: Valuing our Methodological Diversity
Panel Session 951 to be held in Pacific A on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Presidential Strand
Chair(s):
Jennifer C Greene, University of Illinois at Urbana-Champaign, jcgreene@illinois.edu
Discussant(s):
Thomas Schwandt, University of Illinois at Urbana-Champaign, tschwand@illinois.edu
Abstract: Evaluation has many countenances in society, and this diversity of evaluation purpose and stance has served us well for several decades. This diversity has supported the spread of evaluation into many sectors of society, swelled the ranks of evaluation associations around the globe, and attracted many different kinds of practitioners and scholars to the evaluation community. But we are once again arguing about method. The argument this time is infused with multiple strands from the political arena, notably neoliberalism's calls for accountability, for results, and for credible evidence upon which to base policy decisions. Yet we evaluators have also taken sides in this debate and have re-opened rifts once healed. This session is envisioned to help begin to heal those rifts. Evaluators of all stripes are needed in today's complex and fast-paced world, and evaluators of all stripes are needed in our ongoing conversations and engagements, one with another.
Balancing Rigor, Relevance, and Reason: Fitting Methods to the Context
Debra Rog, Westat, debrarog@westat.com
Each evaluation situation is different. Each has its own set of questions and a distinctive, multi-faceted context. Over the years, I have tried to become more cognizant of evaluation's context and to bring a mix of methods to addressing the questions. However, my training as an experimental social psychologist, with its special emphasis on ruling out alternative explanations, has an undeniable influence on my work. In this presentation, I will address how I strive to conduct evaluations that are defensible (rigor), feasible within contextual limits and opportunities (reason), and attentive to multiple stakeholder concerns (relevance). I will further address how my evaluation practice often provides an opportunity, beyond the particular case, to learn more about a domain of intervention, a broader social problem, and/or a vulnerable population. I will describe how I have tried to make the most of each evaluation to enhance this understanding and to provide findings of more general consequence.
Methods for Kaupapa Māori Evaluation
Fiona Cram, Katoa Ltd, fionac@katoa.net.nz
Māori (Indigenous) evaluators, including myself, often live in two worlds: one foot in communities where local Māori services strive to reduce disparities and facilitate wellness; the other foot in agencies that fund those services. Both worlds demand evaluation methods that credibly assess and understand success. Use of the 'Most Significant Change' method allows people from both worlds to tell their stories of change as well as contemplate what they value in terms of success. This creates opportunities for dialogue and shared understandings, and de-emphasizes my evaluator role as a translator between worlds. This method is compatible with Kaupapa Māori (by Māori, with Māori) evaluation as it privileges Māori cultural philosophies, including the importance of relationships, and takes for granted Māori self-determination. Through the use of this method, evaluation seizes an opportunity to contribute to the creation of a more inclusive society that honors and protects the rights of Māori.
Valuing Different Ways of Knowing in Evaluating Programmes, Policy and Practice
Helen Simons, University of Southampton, h.simons@soton.ac.uk
My practice of evaluation has always been concerned with exploring the democratic function of evaluation to promote understanding of social and educational programmes. To this end I have used approaches, such as case study and narrative, which engage people in the generation of evaluation knowledge and have reported my work in accessible ways to those beyond the case. The aspiration has been to create opportunities for dialogue and development of policies and practice. Increasingly, I feel the need to broaden the countenance of my evaluation practice both to acknowledge evaluation as intrinsically a social, political practice and to extend and honour different ways of knowing. I will argue that using art forms in the evaluation process both broadens our perspective of how to 'value' programmes and policies and extends the democratic function of evaluation to create useful knowledge in fair and just ways.

