A 360 Evaluation of Service Learning Programs

Presenter(s):

Guili Zhang, East Carolina University, zhangg@ecu.edu

Abstract:
Service-learning programs often involve multiple stakeholders and generate intended main effects as well as unanticipated spillover impacts. While evaluations that target a single aspect of impact or a single stage of a service-learning program can be informative and valuable for answering isolated questions, they often fail to capture the multifaceted effects a service-learning program generates both during its course and at its conclusion. This proposal describes an effective 360° evaluation of a service-learning tutoring program in teacher education at a research university, following Stufflebeam's context, input, process, product (CIPP) model. The process and advantages of the 360° evaluation at each of the context, input, process, and product stages are described and discussed. The 360° evaluation using the CIPP model is systematic and can help researchers strive toward a more holistic appraisal of service-learning programs.


Using Participatory Evaluation for Program-level Assessment of Student Learning in Higher Education

Presenter(s):

Monica Stitt-Bergh, University of Hawaii, Manoa, bergh@hawaii.edu

Abstract:
Regional accreditation agencies require that higher education institutions conduct student-learning assessment, and the U.S. Department of Education is pushing for more transparent accountability. Using the University of Hawai'i at Mānoa (UHM) as an example, I describe how a practical participatory evaluation (P-PE) approach can meet accreditation demands for program-level assessment of student learning and hold the institution responsible for student learning. I explain the factors related to UHM's organizational culture and values that made P-PE an appropriate evaluation approach. This presentation is aimed at those interested in program assessment of student learning at a research university or in factors contributing to P-PE success. Session attendees will leave knowing the factors that led to positive reception of P-PE as an evaluation approach; strategies to grow P-PE in an organization; and how P-PE results can be used to meet regional accreditation requirements.


Evaluators and Institutional Researchers Working Together to Understand Student Success in Learning Communities

Presenter(s):

Amelia E Maynard, University of Minnesota, mayn0065@umn.edu

Sally Francis, University of Minnesota, fran0465@umn.edu

Abstract:
Institutional research (IR) offices in community colleges nationwide are currently collecting and reporting on extensive data sets, with a particular focus on student retention. This paper discusses how external evaluators can work with IR offices to provide a deeper understanding of program successes and challenges in improving retention. We present a case study of an evaluation of learning communities (LCs) in two community colleges. First, we provide the context of the cases and describe the data the colleges were already collecting. Then, we discuss the evaluation study we designed, which included a retrospective analysis of student-level data and interviews with students and faculty. The presentation focuses on how the evaluation contributed to a more comprehensive understanding of how LCs affect student success. Lastly, we discuss how the colleges have applied these findings and some challenges we faced as external evaluators working with the colleges' IR offices.


Talking About Assessment: An Analysis of the Measuring Quality Blog and the Comments it Elicited

Presenter(s):

Gloria Jea, University of Illinois at Urbana-Champaign, gjea2@illinois.edu

Abstract:
Assessing student learning outcomes has become an important part of accreditation and of discussions about quality in higher education (Ewell, 2009; Kuh & Ikenberry, 2009). Interested in the conversations around learning outcomes assessment in higher education, this research examines the values held by faculty, institutional researchers, professionals, and other observers. It does so through a qualitative content analysis of a special blog series, Measuring Stick, that The Chronicle of Higher Education ran during the fall of 2010. The blog explored debates about quality in higher education, addressing two main questions: how should quality in higher education be measured, and are higher education's ostensible quality-control mechanisms functioning well? By analyzing the blog postings and reader comments, this paper proposes blogs as a source of data, discusses the value of these comments, and examines the potential of blogs as an arena for constructive conversations about student learning outcomes assessment.