Evaluation 2011


Session Title: Weighting and Measurement in Evaluation Quality for Educational Arenas
Multipaper Session 991 to be held in Santa Monica on Saturday, Nov 5, 2:20 PM to 3:50 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Karen Larwin, Youngstown State University, khlarwin@ysu.edu
Evaluating Measurement Invariance in Educational Settings
Presenter(s):
Lihua Xu, University of Central Florida, lihua.xu@ucf.edu
Abstract: In education, tests and mental measurements are used extensively to assess particular student attributes (e.g., degree of achievement motivation or level of anxiety). Educational researchers often compare group differences on these attributes or constructs in an attempt to understand possible causes of the achievement gap in various subject areas. However, an important concern raised in recent years is whether different groups (e.g., gender, ethnicity, age) interpret a measurement in the same way. If groups interpret a measurement inconsistently, group comparisons made at the level of observed-variable means are misleading. This paper will therefore discuss the concept of measurement invariance and the steps used to evaluate it, concluding with a step-by-step illustration using cross-cultural data.
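For readers unfamiliar with the mechanics, invariance is typically evaluated as a sequence of increasingly constrained multi-group CFA models (configural, metric, scalar), with each step compared to the previous one via a chi-square difference test. The Python sketch below illustrates only that comparison step; the model names and fit statistics are hypothetical placeholders, not results from the paper.

```python
from scipy.stats import chi2

def chisq_diff_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Likelihood-ratio (chi-square difference) test for two nested CFA models."""
    delta_chisq = chisq_restricted - chisq_free
    delta_df = df_restricted - df_free
    return delta_chisq, delta_df, chi2.sf(delta_chisq, delta_df)

# Hypothetical (chi-square, df) values for the usual invariance sequence;
# each step adds cross-group equality constraints to the previous model.
fits = {
    "configural": (210.4, 96),   # same factor structure in each group
    "metric":     (224.1, 104),  # + factor loadings constrained equal
    "scalar":     (251.8, 112),  # + item intercepts constrained equal
}

steps = list(fits)
for freer, stricter in zip(steps, steps[1:]):
    d_chi, d_df, p = chisq_diff_test(*fits[stricter], *fits[freer])
    print(f"{freer} -> {stricter}: delta-chi2={d_chi:.1f}, delta-df={d_df}, p={p:.3f}")
```

A nonsignificant difference at a given step is taken as evidence that the added equality constraints hold, so the stricter level of invariance is supported.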
It's all Relative: Applying Relative Weight Analysis to Understand and Improve Programs, Practices and Data Collection Methods
Presenter(s):
Emily Hoole, Center for Creative Leadership, hoolee@ccl.org
Abstract: This session will focus on how evaluators can use Relative Weight Analysis (RWA) to go beyond multiple regression in understanding the relative importance of predictors in explaining an outcome, especially when the predictors are correlated. Using several examples of how this method can be used to understand social phenomena, improve programs, and improve survey design, this introduction will add another tool to the evaluator's quantitative toolbox. At the Center for Creative Leadership, relative weight analysis has been used to understand and improve the overall participant experience, review the most critical questions on an end-of-program survey, explore patient satisfaction in a public health clinic, and understand which elements of feedback coaching are most effective. These results provide deeper insights and actionable data for evaluators and stakeholders. Opportunities and challenges in using the method will be explored, along with practical aspects of applying RWA to a dataset.
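As a minimal sketch of the technique (not the Center for Creative Leadership's own implementation), Johnson's (2000) relative weights can be computed with numpy by orthogonalizing the predictors through the square root of their correlation matrix. The data below are simulated purely for illustration.

```python
import numpy as np

def relative_weights(X, y):
    """Johnson's (2000) relative weight analysis.

    Returns raw weights (which sum to R^2) and rescaled weights
    (the proportion of predictable variance credited to each predictor).
    """
    corr = np.corrcoef(np.column_stack([X, y]), rowvar=False)
    Rxx = corr[:-1, :-1]                  # predictor intercorrelations
    rxy = corr[:-1, -1]                   # predictor-criterion correlations
    evals, evecs = np.linalg.eigh(Rxx)
    Rhalf = evecs @ np.diag(np.sqrt(evals)) @ evecs.T   # Rxx ** (1/2)
    beta = np.linalg.solve(Rhalf, rxy)    # coefficients of y on orthogonalized predictors
    raw = (Rhalf ** 2) @ (beta ** 2)      # raw relative weights
    return raw, raw / raw.sum()

# Illustrative data: three correlated predictors and one outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 1] += 0.7 * X[:, 0]                  # induce predictor correlation
y = X @ np.array([0.5, 0.3, 0.1]) + rng.normal(size=200)
raw, rescaled = relative_weights(X, y)
print("R^2 =", raw.sum().round(3), "rescaled weights =", rescaled.round(3))
```

Unlike standardized regression coefficients, the raw weights partition the model R^2 additively, which is what makes them interpretable as relative importance even when predictors overlap.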
A Posteriori Evaluation Of Assessments: Comparison of Hierarchical Cluster Analysis and Confirmatory Factor Analysis
Presenter(s):
Vasanthi Rao, University of South Carolina, raov@mailbox.sc.edu
Min Zhu, University of South Carolina, helen970114@gmail.com
Robert Johnson, University of South Carolina, rjohnson@mailbox.sc.edu
Kristina Ayers Paul, University of South Carolina, paulka@mailbox.sc.edu
Abstract: Assessments used in large-scale testing programs are carefully designed to measure a construct. Development efforts involve expert review through statistical, curriculum, administrative, and political lenses. Once a test is administered, post-hoc evaluations of assessment quality focus on construct-related issues such as item difficulty and discrimination. In this paper, we propose that the examination also include rigorous scrutiny of ancillary claims made for the assessment, such as the cognitive levels of test items, to confirm whether they have been met. Using hierarchical cluster analysis (HCA) and confirmatory factor analysis (CFA), we examine a visual arts assessment (N = 4,386) that was designed to contain an allotted percentage of test items at different levels of cognitive skill according to the original Bloom's taxonomy (Bloom, 1956). We compare and contrast HCA and CFA as an example of post-hoc analysis of assessments to validate this claim.
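A minimal sketch of the HCA side of such a post-hoc check, assuming an examinees-by-items matrix of scored responses: items are clustered on a correlation-based distance and the dendrogram is cut at the number of cognitive levels the test blueprint claims. The data and parameter values here are illustrative, not the paper's.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_items(responses, n_levels):
    """Cluster test items by the similarity of their response patterns.

    responses: (n_examinees, n_items) matrix of scored item responses.
    n_levels: number of claimed cognitive levels (e.g., Bloom categories).
    Returns one cluster label per item, to compare against the blueprint.
    """
    # Distance between items = 1 - Pearson correlation of response vectors.
    dist = pdist(responses.T, metric="correlation")
    Z = linkage(dist, method="average")          # agglomerative HCA
    return fcluster(Z, t=n_levels, criterion="maxclust")

# Illustrative call with simulated 0/1 item scores (the paper's visual
# arts data, N = 4,386, are not reproduced here).
rng = np.random.default_rng(1)
responses = (rng.random((500, 40)) < 0.6).astype(float)
labels = cluster_items(responses, n_levels=3)
print(labels)
```

If the blueprint's claim holds, items written at the same cognitive level should tend to land in the same cluster; CFA then tests the same grouping as an explicit factor structure.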
