In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.

Roundtable Rotation I:
Non-response Bias as a Limitation: Practical Perspectives of Evaluation Quality Using Survey and Questionnaire Data

Roundtable Presentation 114 to be held in SAN JACINTO on Wednesday, Nov 10, 4:30 PM to 6:00 PM

Sponsored by the Quantitative Methods: Theory and Design TIG

Presenter(s):

Michelle Bakerson, Indiana University South Bend, mmbakerson@yahoo.com

Abstract:
Evaluation quality, from a practical standpoint, depends on the quality of the data gathered. Surveys and questionnaires are commonly used tools for gathering data; however, this type of data collection comes with certain limitations and biases. One major limitation is non-response bias, which exists when the interpretation of results based on those who respond would differ from the interpretation that would be made if those who do not respond were also included. The bias created by non-response is a function of both the level of non-response and the extent to which non-respondents differ from respondents (Kano, Franke, Afifi, & Bourque, 2008). This session examines what occurs within survey and questionnaire data, presenting detailed alternatives for interpretation that take non-response into account by checking that the data are valid and do not contain non-response bias. Taking this extra step when examining data will help ensure quality in evaluation findings.
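The abstract's key claim, that non-response bias depends jointly on the level of non-response and on how much non-respondents differ from respondents, matches the standard decomposition of the bias of a respondent-only mean. As a worked illustration (standard survey-methodology notation, not drawn from the presentation itself):

\[ \operatorname{Bias}(\bar{y}_r) = \bar{y}_r - \bar{y} = \frac{n_{nr}}{n}\,(\bar{y}_r - \bar{y}_{nr}) \]

where \(\bar{y}_r\) is the mean among the \(n_r\) respondents, \(\bar{y}_{nr}\) is the mean among the \(n_{nr}\) non-respondents, and \(n = n_r + n_{nr}\). For example, with a 40% non-response rate and respondents averaging 5 points higher than non-respondents, the respondent-only mean overstates the full-sample mean by 0.4 × 5 = 2 points.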

Roundtable Rotation II:
Coding Open-Ended Survey Items: A Discussion of Codebook Development and Coding Procedures

Roundtable Presentation 114 to be held in SAN JACINTO on Wednesday, Nov 10, 4:30 PM to 6:00 PM

Sponsored by the Quantitative Methods: Theory and Design TIG

Presenter(s):

Heather Bennett, University of South Carolina, bennethl@mailbox.sc.edu

Joanna Gilmore, University of South Carolina, jagilmor@mailbox.sc.edu

Grant Morgan, University of South Carolina, morgang@mailbox.sc.edu

Abstract:
Responses to open-ended items are generally analyzed inductively through the examination of themes. Unfortunately, key decisions in this process, such as how to segment open-ended responses and how many codes to include in a codebook, are often glossed over in published research articles (Draugalis, Coons, & Plaza, 1998; Lupia, 2008). To address this call for greater transparency, this roundtable presentation will describe the decision-making process that researchers from the Office of Program Evaluation (OPE) used to code open-ended items. OPE researchers will also share lessons learned about facilitating the coding of open-ended items among a team of researchers and ways to present findings to clients. This roundtable will be useful for introducing coding procedures to novice qualitative researchers. Additionally, the presenters will encourage discussion among advanced researchers concerning key decisions in analyzing and reporting data from open-ended survey items.
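The abstract does not publish OPE's codebook or coding procedures, so the sketch below is purely illustrative: a hypothetical three-code codebook, hypothetical double-coded response segments, and percent agreement plus Cohen's kappa as one common way a coding team might check consistency. The choice of kappa is an assumption, not a method named by the presenters.

```python
from collections import Counter

# Hypothetical codebook; the presenters' actual codes are not given in the abstract.
CODEBOOK = {
    "RES": "comments about available resources",
    "INS": "comments about instruction quality",
    "LOG": "comments about logistics and scheduling",
}

# Hypothetical codes assigned independently by two coders to the same
# six response segments.
coder_a = ["RES", "INS", "INS", "LOG", "RES", "INS"]
coder_b = ["RES", "INS", "LOG", "LOG", "RES", "RES"]

def percent_agreement(a, b):
    """Share of segments where both coders assigned the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement between two coders, corrected for chance agreement."""
    n = len(a)
    observed = percent_agreement(a, b)
    freq_a, freq_b = Counter(a), Counter(b)
    # Chance agreement: probability both coders pick the same code at random,
    # given each coder's marginal code frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in CODEBOOK) / n ** 2
    return (observed - expected) / (1 - expected)

print(f"Percent agreement: {percent_agreement(coder_a, coder_b):.2f}")  # 0.67
print(f"Cohen's kappa:     {cohens_kappa(coder_a, coder_b):.2f}")       # 0.52
```

Chance-corrected agreement such as kappa is often reported alongside raw agreement because two coders can agree frequently by chance alone when a few codes dominate the codebook.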