Evaluation 2009



Session Title: Exploring Methods and Tools in Higher Education Evaluation
Multipaper Session 627 to be held in Panzacola Section H3 on Friday, Nov 13, 4:30 PM to 6:00 PM
Sponsored by the Assessment in Higher Education TIG
Chair(s):
Stanley Varnhagen,  University of Alberta, stanley.varnhagen@ualberta.ca
Discussant(s):
Stanley Varnhagen,  University of Alberta, stanley.varnhagen@ualberta.ca
The Use of Confirmatory Factor Analysis to Support the Use of Subscores for a Large-Scale Student Outcomes Assessment
Presenter(s):
Rochelle Michel, Educational Testing Service, rmichel@ets.org
Towanda Sullivan, Educational Testing Service, tsullivan@ets.org
Abstract: This paper presentation will use confirmatory factor analysis to provide validity evidence supporting the various skill- and context-based subscores currently reported for a large-scale student outcomes assessment. The assessment is designed to provide data to accomplish five major goals: 1) to measure and document program effectiveness, 2) to assess student proficiency in core academic skill areas, 3) to allow comparisons of performance with other programs nationwide, 4) to conduct benchmark and trend analyses, and 5) to meet the requirements of the Voluntary System of Accountability (VSA). To achieve these goals, test users are provided with skill- and context-based subscores. Users of large-scale outcomes assessments will find this presentation useful when making decisions about how to use information reported on an assessment's subdomains.
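The kind of analysis described above can be illustrated with a minimal, hypothetical sketch: fit a multi-factor measurement model in which each reported subscore is a latent factor, then inspect fit indices to judge whether separate subscores are defensible. The sketch below uses Python with the open-source semopy package; the factor names, item names, and data file are invented for illustration and are not from the presented study.

    # Hypothetical CFA sketch (not the authors' actual model or data).
    import pandas as pd
    from semopy import Model, calc_stats

    # Examinee-level item scores; file and column names are assumptions.
    data = pd.read_csv("item_scores.csv")

    # lavaan-style description: each reported subscore is a latent factor.
    desc = """
    reading =~ read1 + read2 + read3
    math    =~ math1 + math2 + math3
    """

    model = Model(desc)
    model.fit(data)

    # Fit indices such as CFI and RMSEA indicate whether the subscore
    # structure fits the item covariances better than a single overall
    # factor would, which is the sort of validity evidence at issue here.
    print(calc_stats(model).T)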
The Impact of 'No Opinion' Response Options on Data Quality in High-Stakes Evaluations
Presenter(s):
Dorinda Gallant, The Ohio State University, gallant.32@osu.edu
Tiffany Harrison, The Ohio State University, tkw97@aol.com
Aryn Karpinski, The Ohio State University, karpinski.10@osu.edu
Jing Zhao, The Ohio State University, zhao.195@osu.edu
Abstract: Universities and colleges have traditionally used students' evaluations of instruction as part of the tenure and promotion process for faculty. The purpose of this study is to investigate the impact of 'no opinion' response options in high-stakes evaluations. Specifically, this study examines the effect of 'no opinion' response options on students' evaluations of faculty instruction at a large Midwestern university. The sample consists of over 65,000 undergraduate students who completed evaluation-of-instruction instruments in courses taught by faculty during autumn quarter of 2005. Data analyses will include comparing responses for each item across college classification, comparing means for each item with and without the inclusion of the 'no opinion' response, and examining patterns in item responses. Policy implications of the results of this study will be discussed.
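One of the analyses described above, comparing item means with and without 'no opinion' responses, can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the file name, the item column names, and the coding of 'no opinion' as 0 on a 1-5 Likert scale are all assumptions.

    # Hypothetical sketch: item means with 'no opinion' dropped vs. recoded.
    import numpy as np
    import pandas as pd

    ratings = pd.read_csv("course_evaluations.csv")  # one row per student
    items = [c for c in ratings.columns if c.startswith("item_")]

    for item in items:
        scores = ratings[item]
        # Treat 'no opinion' (assumed coded 0) as missing and drop it...
        mean_dropped = scores.replace(0, np.nan).mean()
        # ...versus recoding it to the midpoint of the assumed 1-5 scale.
        mean_midpoint = scores.replace(0, 3).mean()
        print(f"{item}: dropped={mean_dropped:.2f}, midpoint={mean_midpoint:.2f}")

A gap between the two means for an item flags it as sensitive to how 'no opinion' is handled, which matters when the scores feed tenure and promotion decisions.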
Using Cognitive Interviewing to Grasp Cultural Context: Evidence of Construct Validation Among Under-represented Populations
Presenter(s):
Valerie K York, Kansas State University, vyork@ksu.edu
Sheryl A Hodge, Kansas State University, shodge@ksu.edu
Christa A Smith, Kansas State University, christas@ksu.edu
Abstract: Cognitive interviewing was utilized to better understand unexpected differences in survey results between two groups of traditionally under-represented students in the engineering field. These students had been randomly assigned into groups of scholarship recipients and matched-cohort controls. Unexpectedly, survey results indicated more positive outcomes for the control group over time in terms of their perceptions of both job expectations and STEM (Science, Technology, Engineering, and Mathematics) field efficacy. Speculating that interpretation of survey items may have played a part in these findings, the evaluation team conducted cognitive interviews with the students to gather their feedback on the items. These results were also compared to those obtained through cognitive interviewing with a group of evaluators, to determine whether the process must be conducted with the sample of interest itself. Implications are discussed related to the utility of cognitive interviewing when evaluating programs targeting traditionally under-represented populations.
Maintaining High Quality Responses and Response Rates Over Time: The Challenges and Solutions in Evaluations of Undergraduate Summer Science Research Programs
Presenter(s):
Courtney Brown, Indiana University, coubrown@indiana.edu
Rebekah Kaletka, Indiana University, rkaletka@indiana.edu
Abstract: This paper will discuss methods used to successfully increase both short- and long-term response rates of college students participating in a summer research program. Specifically, the paper will focus on methodologies that engage participants even after they leave the program and become increasingly occupied and mobile. Examples from both a nationwide multi-site and a single-site undergraduate summer science research program will be discussed. Personal interaction and relationship building with participants and program directors will be presented as methods for gaining engagement. The use of technology such as web-based surveys, social networking websites, text messaging, and program and participant profile web pages will also be addressed; these are effective methods for maintaining contact with this transient population. These technologies allow for distribution of information, maintenance of up-to-date contact information, and collection of follow-up data, which together help to increase both the rate and quality of responses.
Proxies and the Autoptic Proference: Using a Logic Model to Construct and Deconstruct the Teacher Education Accreditation Council (TEAC) Accreditation of an Educational Leadership Program
Presenter(s):
John Hanes, Regent University, jhanes@regent.edu
Glenn Koonce, Regent University, glenkoo@regent.edu
Abstract: We use an elaborated general program logic model to construct an overview of an Educational Leadership Program in its university and societal contexts. This model reflects the relationships among the standard inputs, processes, outputs, and outcomes as well as the Teacher Education Accreditation Council (TEAC) Quality Principles and Inquiry Brief components. The six main Interstate School Leaders Licensure Consortium (ISLLC) standards provide both a set of status claims and an initial group of outcome variables for the model. We place the 20 pieces of evidence from Appendix E of the TEAC Inquiry Brief within the structure of the model, and we also integrate the other chapters and appendices from the Brief. This initial model prompts some questions regarding the nature of the evidence and what is actually being measured and assessed. In most cases, the evidence that supports the status claims represents proxies for what we really want to know about our graduates. Utilizing Wigmore's idea of the autoptic proference, we explore the model with the intent of addressing a broader array of outcomes that relate to our ultimate customers and to our goals as a school of education. From an extended perspective based on a three-year engagement with the accreditation process, we also inquire into the nature of value-added and causal claims within the realm of available evidence, now and in the future.
