Evaluation 2008



Session Title: Course-Evaluation Designs II: Faculty Perspectives on Practices and Continuing Development
Multipaper Session 808 to be held in Centennial Section G on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Rick Axelson,  University of Iowa,  rick-axelson@uiowa.edu
Discussant(s):
Jennifer Reeves,  Nova Southeastern University,  jennreev@nova.edu
The Evaluation of an Adjunct Faculty Development Program at a Midwestern Private Non-Profit University
Presenter(s):
Jeannie Trudel,  Indiana Wesleyan University,  jeannie.trudel@indwes.edu
Ray Haynes,  Indiana University,  rkhaynes@indiana.edu
Abstract: This presentation discusses a completed evaluation of the Adjunct Faculty Development Program for Business and Management programs in the College of Adult and Professional Studies at a Midwestern private non-profit university. According to Fitzpatrick, Sanders and Worthen (2004), evaluations are conducted to judge the worth, merit and value of programs and products. The evaluation utilized Stufflebeam’s (2000) context, input, process, and product (CIPP) evaluation model to assess the adjunct faculty development program. The CIPP evaluation is based upon the basic principles of an open system (input, process, and output) and is capable of guiding decision making and addressing accountability (Fitzpatrick et al., 2004). The evaluation’s methodology and findings are presented and reconciled using the CIPP evaluation model checklist.
Evaluating a Doctoral Research Community in Online Education: Faculty and Independent Learner Interaction and Satisfaction
Presenter(s):
James Lenio,  Walden University,  jim.lenio@waldenu.edu
Sally Francis,  Walden University,  sally.francis@waldenu.edu
Nicole Holland,  Walden University,  nicole.holland@waldenu.edu
Iris Yob,  Walden University,  iris.yob@waldenu.edu
David Baur,  Walden University,  david.baur@waldenu.edu
Abstract: As the pressure to demonstrate student learning and success in higher education increases, the difficulty of evaluating doctoral education remains. The Research Forum at Walden and its accompanying assessment instruments represent an effort to provide systematic evaluation of this student population. The Research Forum, designed to support individual online doctoral student research, facilitates student communication with faculty mentors, promotes dialogue with other students, and provides access to materials specific to student research interests. The forum also allows faculty to be active mentors while enabling them to easily track mentee progress and performance. This paper examines how faculty utilized the Research Forum and communicated with their mentees, how helpful faculty and students perceived the Research Forum to be, and how student satisfaction has changed over time. Results of two online surveys, the Research Forum course evaluation for students and a Research Forum satisfaction/usage survey for faculty, will be presented.
Building Evaluation Capacity in Faculty through a Systematic Plan for Teaching Improvement
Presenter(s):
Meghan Kennedy,  Neumont University,  meghan.kennedy@neumont.edu
Abstract: Course evaluations are designed to provide meaningful information so instructors can improve their teaching and curriculum, but typically, this evaluation feedback is isolated and lacks connection to past or future courses. These evaluations are rarely a part of a systematic evaluation plan where faculty assess, improve, and follow up on identified areas. Faculty are simply passive receivers of feedback instead of active evaluators of their own course and teaching. Faculty must be trained to effectively evaluate their own teaching and curriculum. How can faculty take the information they receive and evaluate its soundness? What do they do with the data? How can they dig deeper and ask more questions? When do they make the changes and how do they communicate them? Training faculty to be evaluators in their own courses can change course evaluations from a punitive to an empowering experience.
Beliefs of Teachers About the Use and Efficacy of End of Semester Student Evaluation Surveys in Japanese Tertiary Education
Presenter(s):
Peter Burden,  Okayama Shoka University,  burden-p@po.osu.ac.jp
Abstract: For over five years, student evaluation of teaching through end-of-semester surveys (SETs) has been mandatory, hatched by bureaucracy and delivered to schools as an imperative, but often without clarification of aims or purposes. Little has been written questioning the introduction of evaluation in Japan, and even less research has been channeled into gaining an understanding of the perspectives of teachers. A qualitative, case-study approach examines, through in-depth interviews, the perspectives of 22 English language teachers in Japanese tertiary education on the purpose and use of this form of evaluation. Findings suggest that teachers perceive the ratings as useful for neither formative nor summative purposes and are not informed of their purpose in any way, which leads to haphazard administration that undermines consequential validity and teachers' ability to improve; teachers also cite threats to job security and a lack of voice in decision making.
