Evaluation 2008



Session Title: Program-Level Evaluations in Higher Education: Expanding Practices and Policy
Multipaper Session 404 to be held in Mineral Hall Section A on Thursday, Nov 6, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Howard Mzumara,  Indiana University-Purdue University Indianapolis,  hmzumara@iupui.edu
Discussant(s):
Stanley Varnhagen,  University of Alberta,  stanley.varnhagen@ualberta.ca
Outcomes Assessment Utilization in the Context of Teacher Education Program Review
Presenter(s):
Georgetta Myhlhousen-Leak,  University of Iowa,  leak@q.com
Abstract: This research investigated the factors and types of use present in teacher education outcomes assessment. A review of the literature suggests that assessment results are used far less than intended. The absence of a definable structure for understanding use and its potential to produce desirable change has made incorporating outcomes assessment into the processes of higher education a difficult, unclear, and sometimes disjointed experience for many administrators and faculty. Concepts of use identified and described in the evaluation utilization literature were applied to investigate how program administrators, faculty administrators, and faculty members who act as evaluators and/or intended users conceptualize use, and how those conceptualizations are reflected in the planning, implementation, and reporting of outcomes assessment. This research provides evidence and insight into the nature of evaluative use and adds to the collective knowledge of how use occurs within the evaluation policy context of teacher preparation program review in higher education.
Quasi-experimental Outcome Evaluation of an Undergraduate and a Graduate Entrepreneurship Education Program
Presenter(s):
Elaine Rideout,  North Carolina State University,  ecrideout@econinvest.org
Denis Gray,  North Carolina State University,  denis_gray@ncsu.edu
Abstract: To the degree that entrepreneurship education can increase the number and effectiveness of entrepreneurs, particularly for high-technology ventures, economic growth may result. Unfortunately, very little empirical evidence connects entrepreneurship education to subsequent economic growth. This paper will describe a quasi-experimental control-group study of two long-running (10+ years) and respected university-based high-tech entrepreneurship education (E-ed) models, one for undergraduates and the other for graduate students. Data collection is currently underway and will identify concrete E-ed economic impacts and mediating factors (personal and environmental) associated with entrepreneurial propensities, activities, and success. Impacts on entrepreneurial intentions and enterprise activities (including start-ups) will be reported, along with evidence about critical E-ed mediators, including personality characteristics, entrepreneurial orientation preferences, and other variables. Implications for the design, operation, and evaluation of university entrepreneurship education programs will be discussed.
Thinking Differently About Student Assessment and Program Evaluation in a School of Education: A Self-Study and Blueprint for Change!
Presenter(s):
Patrick Lowenthal,  Regis University,  plowenth@regis.edu
John White,  Regis University,  jwhite@regis.edu
Karen Cooley,  Regis University,  kcooley@regis.edu
M Sue Davies,  Regis University,  mdavies@regis.edu
Abstract: The faculty in the School of Education at Regis University conducted a self-study as part of the national accreditation process for the Teacher Education Accreditation Council (TEAC). Through this process, the faculty realized that they did not have enough evidence of student learning, which sparked a re-evaluation of how student assessment and program evaluation were conducted in the school. The faculty developed a new assessment and evaluation process that requires students to submit pre-identified artifacts from individual courses to an electronic portfolio. At certain stages (i.e., "gates") throughout the program, students must submit their portfolio for faculty review in order to continue in the program. Data collected by "gatekeepers" enable the faculty not only to assess student learning, both individually and collectively, but also to evaluate the program as a whole, whether at the instructor level, the course level, or the program level.
Evaluating Foreign Language Student Residences: Best Practices and Lessons Learned
Presenter(s):
Cary Johnson,  Brigham Young University,  cary_johnson@byu.edu
Wendy Baker,  Brigham Young University,  wendy_baker@byu.edu
Jennifer Bown,  Brigham Young University,  jennifer_bown@byu.edu
Rob Martinsen,  Brigham Young University,  rob.martinsen@byu.edu
Abstract: Foreign language student residences give language learners an opportunity to be immersed in their target language without the high cost of traveling to and living in the countries where it is spoken. But how effective are these residences in helping learners acquire the language? The language residences selected for the evaluation were French, German, Russian, and Japanese. Qualitative methods of evaluation included videotaped dinner conversations and interviews with the participants; quantitative methods included a language use survey, a log of daily language use, ACTFL Oral Proficiency Interview scores, a computer-adaptive proficiency test, and a pronunciation test. This presentation will focus on the outcomes of the evaluation as well as the lessons learned in conducting language program evaluations.

