
Session Title: Using Qualitative Inquiry to Address Contextual Issues in Evaluation
Multipaper Session 571 to be held in Wekiwa 8 on Friday, Nov 13, 1:40 PM to 3:10 PM
Sponsored by the Qualitative Methods TIG
Chair(s):
Janet Usinger, University of Nevada, Reno, usingerj@unr.edu
Discussant(s):
Sandra Mathison, University of British Columbia, sandra.mathison@ubc.ca
Evaluation Context and Valuing Focus Group Evidence
Presenter(s):
Katherine Ryan, University of Illinois at Urbana-Champaign, k-ryan6@illinois.edu
Abstract: The notion of context and its varied meanings in evaluation are central to evaluation designs, methods, and data collection strategies, influencing what is 'possible, appropriate, and likely to produce actionable evidence' (AEA Call for Papers, 2009). In this paper, I examine three focus group approaches to determine how particular evaluation contexts influence the quality and credibility of the evidence gathered from these approaches. My examination includes a brief discussion of their respective theoretical foundations (epistemology, etc.) and implementation (structure, setting, etc.). I present a case vignette for each, illustrating how these focus group approaches are utilized in evaluation. Notably, the value and quality of evidence differ depending on such factors as the nature of the evaluation questions, the characteristics of the studied phenomena, and evaluation constraints (Julnes & Rog, 2008). To study the relationship between these contextual factors and the soundness of evidence from these focus group approaches, I draw on Guba and Lincoln's standards for judging the quality of qualitative data: credibility, transferability, dependability, and confirmability.
Teaching/Learning Naturalistic Evaluation: Lived Experiences in an Authentic Learning Project
Presenter(s):
Melissa Freeman, University of Georgia, freeman9@uga.edu
Deborah Teitelbaum, University of Georgia, deb.teitelbaum@yahoo.com
Soria Colomer, University of Georgia, scolomer@uga.edu
Sharon Clark, University of Georgia, jbmb@uga.edu
Ann Duffy, University of Georgia, ann.duffy@glisi.org
San Joon Lee, University of Georgia, lsj0312@uga.edu
Dionne Poulton, University of Georgia, dpoulton@uga.edu
Abstract: This paper describes the inherent complexities of teaching and learning naturalistic evaluation through authentic learning projects. Drawing on our lived experiences as teachers and students making sense of the lived experiences of stakeholders, and thus of the context and effect of the evaluand, we explore three points of juncture between doing and learning that characterize the essential features and challenges of naturalistic evaluation: 1) personal experiences as intentional relationships to programs, 2) letting go of prejudgments in order to grasp what is and what might be, and 3) working inductively as a group while avoiding the blind person's elephant. We draw on Saville Kushner's Personalizing Evaluation to contrast naturalistic evaluation approaches that emphasize stakeholders' emic perspectives with one that favors a critical interpretation of meaning as residing in experience notwithstanding stakeholders' perspectives. We conclude with an overview of the parallels between authentic learning and naturalistic evaluation.
To Mix or Not to Mix: The Role of Contextual Factors in Deciding Whether, When and How to Mix Qualitative and Quantitative Methods in an Evaluation Design
Presenter(s):
Susan Berkowitz, Westat, susanberkowitz@westat.com
Abstract: Drawing on 20+ years of experience in a contract research setting, this paper will extrapolate lessons learned about contextual factors most propitious to mixing qualitative and quantitative methods in a given evaluation design. It will discuss the role of different levels and types of contextual factors in informing the decision as to whether, when, and how to combine methods. These factors include: a) the setting of and audiences for the research; b) the funder's goals, objectives, expectations, and knowledge and understanding of methods; c) the fit between the qualitative and quantitative design components as reflected in the framing of the evaluation questions and underlying conceptual model for the study; and, d) the evaluators' skills, sensibilities, expertise and mutual tolerance, including shared 'ownership' of the design and the ability to explain the rationale for mixing methods to diverse, sometimes skeptical, audiences.