|
Adopting a New End-Of-Course Evaluation: Enablers, Constraints, Challenges
|
| Presenter(s):
|
| Marcie Bober-Michel,
San Diego State University,
bober@mail.sdsu.edu
|
| Abstract:
For nearly three decades, San Diego State University/College of Education has used an internally designed end-of-course evaluation. Although validity and reliability data were not routinely calculated, anecdotal evidence indicated that the form lacked rigor. Students complained about its lack of relevance and substance, as well as inconsistent implementation practices among faculty.
This presentation covers the author’s four-year effort to help the College move to a more statistically sound method for a) measuring student perceptions of course quality and b) determining how specific changes in policy, administrative oversight, and staffing might improve survey implementation, analytic reporting, and dissemination/use of results.
|
|
Using the Student Assessment of their Learning Gains for Course Assessment: Is it Feasible?
|
| Presenter(s):
|
| Tim Weston,
University of Colorado Boulder,
westont@colorado.edu
|
| Abstract:
The Student Assessment of their Learning Gains (SALG) is a flexible online assessment tool currently used by over 1,000 undergraduate instructors and over 60,000 students. The survey template allows students to identify course components (e.g., activities, materials, information) that “help them learn” and to assess their own understanding and skills. The presenter will 1) provide an overview of the rationale behind the SALG and its use as a formative course design (versus teaching) instrument, and 2) present validity research on how the SALG has been used for both formative and summative assessment at the instructor and departmental levels. Findings include a content analysis of 394 customized instruments, survey results from 138 instructors describing their use of the SALG for course improvement, and an analysis of student responses to open-ended questions from 64 courses. Discussion will focus on the role of assessing implementation factors in course activities.
|
|
Shaping Outcomes: Evaluating Instructor-Mediated Online Course Offerings in Outcomes-Based Planning and Evaluation
|
| Presenter(s):
|
| Howard Mzumara,
Indiana University-Purdue University Indianapolis,
hmzumara@iupui.edu
|
| Abstract:
Outcomes-Based Planning and Evaluation (OBPE), which includes the development of a Logic Model, has emerged as one of the preferred approaches for evaluating the effectiveness and impact of an institution’s programs and services. This session will provide participants with a report based on a summative evaluation of an IMLS-funded project that involved the development and delivery of an instructor-mediated online course on outcomes-based planning and evaluation (also known as “Shaping Outcomes,” www.shapingoutcomes.org/course) for university students and personnel in the museum and library fields. The presentation will include an interactive discussion on the potential usefulness of outcomes measurement and mixed-method evaluation approaches as powerful tools for planning, evaluating, and improving educational programs and services in higher education settings.
|
|
Course Evaluation for Continuous Improvement: Qualitative Differences Between Online and Paper Administration of Student Course Evaluations
|
| Presenter(s):
|
| Nancy Rogers,
University of Cincinnati,
nancy.rogers@uc.edu
|
| Janice Noga,
Pathfinder Evaluation and Consulting,
jan.noga@stanfordalumni.org
|
| Abstract:
Among institutions of higher education, student evaluations are used as a primary source of information about the quality of teaching and course delivery. However, concerns about the usefulness and value of these course evaluations for continuous improvement are plentiful. Current practice emphasizes the use of scaled questions to produce mean ratings for items. Unfortunately, ratings alone can be inadequate for informing continuous improvement efforts. With the introduction of online evaluation formats, technologically proficient students may contribute more usable feedback for improving course content and teaching effectiveness in the space provided for student comments. By comparing the quality and depth of information provided in traditional paper versus online course evaluation formats, the presenters will describe an ongoing course evaluation pilot program designed to provide usable, continuous-improvement feedback for faculty.
|