Partial Respondents in Online Course Evaluations: Quantitative and Qualitative Analysis of Response Rate Patterns

Presenter(s):

David Nelson, Purdue University, davenelsond@gmail.com

Abstract:
The advent of online student evaluations of course instruction has re-ignited debates about evaluation procedures and validity. Chief among these concerns is the nationwide decline in response rates for course evaluations conducted via an online medium. This paper examines patterns of student response rates in online evaluations from a large public research institution in the Midwest. It identifies several factors that may hinder student participation in voluntary course evaluations, and introduces a student demographic group that was heretofore absent from administrative analyses of student evaluation response rates.
Data analysis demonstrates marked self-selection among students who are now presented with multiple evaluations to complete at once, in contrast to the staggered structure of paper-and-pencil course evaluations. An anonymous survey of these 'partial respondents' provides some insight into students' motivations and their choices about which surveys to complete.


What Are Course Evaluations Evaluating? Establishing the Validity of University Course Evaluations

Presenter(s):

Nancy Rogers, University of Cincinnati, nancy.rogers@uc.edu

Jerry Jordan, University of Cincinnati, jerry.jordan@uc.edu

Abstract:
While data from course evaluation forms are often used to make decisions about faculty and curriculum development, we seldom perform thorough validations of the course evaluation instruments themselves. The utility of these data can be obscured or diminished when questionnaire items are interpreted by students in ways not intended by the evaluators constructing the survey. This research centers on validating course evaluation instruments for undergraduate courses. Two forms of data were collected to discern student perceptions of individual evaluation items: students were asked in both questionnaire and interview formats what individual items meant to them and what factors drove their responses to those items. These student responses were then compared with the intent of the items as articulated by the administrators/developers of the instruments. Since data collected through course evaluation instruments are often the foundation of curriculum reform, validation of instrument items is paramount to effective data-driven decision making.