Session Title: A Conversation With Michael Patton About How Values Undergird the Assessment of Program Effects

Panel Session 852, to be held in California B on Saturday, Nov 5, 9:50 AM to 11:20 AM

Sponsored by the Qualitative Methods TIG

Chair(s):
Charles Reichardt, University of Denver, creichar@du.edu

Abstract:
The paradigmatic views and methodological values of a leading proponent of qualitative methods will be probed and challenged in a collegial conversation about assessing program effects. Although the questions that will be asked are meant to confront and prod, the purpose of the conversation is to understand rather than debate - to talk with, rather than past, each other. If qualitative and quantitative researchers are to resolve their longstanding animosities, they must come to understand each other's perspectives and values. The purpose of this panel is to provide a forum for a qualitative researcher to answer the pointed but respectful questions that quantitative researchers need to have answered if they are to understand and appreciate qualitative methods. In return, the answers will challenge and probe quantitative assumptions and perspectives. Ample time will be left for comments and questions from the audience.

Questions for Michael Patton About Assessing Program Effects: From a Quantitative Perspective

Charles Reichardt, University of Denver, creichar@du.edu

I will pose a series of questions about how qualitative researchers assess program effects. My purpose is to understand the logic, practice, and values of qualitative research, as well as to probe for soft spots. My hope is that quantitative researchers in the audience will come to appreciate and trust qualitative methods by seeing how those methods can withstand rigorous scrutiny from a quantitative perspective. I also hope that qualitative researchers in the audience will benefit in the following way. Over the years, qualitative researchers have challenged the use of quantitative methods. These challenges have helped quantitative researchers better understand their own methods and values, and they have improved the practice of quantitative research. In a parallel fashion, a quantitative challenge to the use of qualitative methods might sharpen qualitative researchers' understanding of their own methods and values, and thereby improve the practice of qualitative research.

Responses From Michael Patton About Assessing Program Effects: From a Qualitative and Complexity Perspective

Michael Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net

Chip Reichardt posed some questions to me about my perspective on assessing program effects. While I believe the questions were intended to be open, inviting, and neutral, the methodological values revealed by and embedded in them offer an opportunity for an in-depth exploration of how methodological values, assumptions, and paradigms influence the very evaluation questions we ask, with significant consequences for how we engage in an evaluation inquiry and for evaluative thinking generally. This issue has been the subject of debates, expert lectures, and advocacy presentations. Instead of those formats, we are proposing a conversation in which my colleagues' panel presentations pose questions generated by, and left unanswered in, my writings, especially with regard to issues of attribution and the assessment of program effects. I will respond through the lenses of systems thinking, complexity concepts, and qualitative inquiry, with a fundamental focus on evaluation use and consequences.

Values Issues and Assumptions in Choosing Evaluation Questions and Evaluative Evidence

Melvin Mark, Pennsylvania State University, m5m@psu.edu

Some evaluations are designed to estimate the effects of a program on a set of pre-identified outcomes. In keeping with the theme of this session, I will explore the values and assumptions involved in choosing such an evaluation versus one that addresses a set of alternative questions. I will also briefly explore the values issues and assumptions associated with alternative methods that might be used to "estimate program effects." The presentation is intended to illuminate aspects of the exchange between Reichardt and Patton in this session, as well as to situate that exchange in relation to selected other contemporary conversations and disputes in evaluation.