Evaluation 2009

Contact emails are provided for one-to-one contact only and may not be used for mass emailing or group solicitations.

Session Title: Advances in Measurement
Multipaper Session 548 to be held in Sebastian Section I2 on Friday, Nov 13, 1:40 PM to 3:10 PM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
Raymond Hart, Georgia State University, rhart@gsu.edu
Implicit Attitude Measures: Avoiding Social Desirability in Evaluations
Presenter(s):
Joel Nadler, Southern Illinois University at Carbondale, jnadler@siu.edu
Abstract: Implicit methods allow the assessment of automatic attitudes using indirect measures. Implicit measures of attitudes, or automatic reactions, have been researched extensively outside of evaluation. Implicit measures include word completion, response-time measures, non-verbal behavior, and most recently the Implicit Association Test (IAT). Implicit results are often only weakly related to explicit (self-report) measures when social desirability concerning the attitude is involved; when there is no socially expected 'right' answer, the relationship between the two methodologies is stronger. The advantage of implicit measures is that they are more resistant to deception and to social desirability effects than self-report measures; disadvantages include issues of construct validity and interpretation. Types of implicit methodologies will be reviewed, with specific focus on how implicit measures can add to the traditional attitudinal measures used in evaluations. Theoretical application, practical concerns, and appropriate use will be discussed.
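(Editorial illustration: IAT results are conventionally summarized as a D-score, the difference in mean response latencies between incompatible and compatible pairing blocks divided by a pooled standard deviation. The sketch below shows only that core computation, not the full published scoring algorithm; the latency data are hypothetical.)

```python
import numpy as np

def iat_d_score(compatible_rt, incompatible_rt):
    """Illustrative IAT scoring: difference in mean response
    latencies divided by the pooled standard deviation of all
    latencies (the core of the conventional D-score)."""
    compatible_rt = np.asarray(compatible_rt, dtype=float)
    incompatible_rt = np.asarray(incompatible_rt, dtype=float)
    pooled_sd = np.concatenate([compatible_rt, incompatible_rt]).std(ddof=1)
    return (incompatible_rt.mean() - compatible_rt.mean()) / pooled_sd

# Hypothetical latencies (ms) for one respondent: slower responses
# in the incompatible block suggest an automatic association
compatible = [612, 580, 655, 601, 597]
incompatible = [845, 790, 910, 802, 835]
print(f"D-score: {iat_d_score(compatible, incompatible):.2f}")
```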
The Validity of Self-Report Measures: Comparisons From Four Designs Incorporating the Retrospective Pretest
Presenter(s):
Kim Nimon, University of North Texas, kim.nimon@gmail.com
Drea Zigarmi, Ken Blanchard Companies, drea.zigarmi@mindspring.com
Abstract: This study compared data from four evaluation designs incorporating the retrospective pretest, analyzing the interaction effect of pretest sensitization and post-intervention survey format on a set of self-report measures. The validity of the self-report data was assessed by correlating results with performance measures. The study detected differences in measurement outcomes across the four designs: designs in which the posttest and retrospective pretest were administered as two separate questionnaires produced the most valid results, whereas designs in which they were administered in a single questionnaire produced the least valid results.
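(Editorial illustration: the validity check described above amounts to correlating self-reported change, posttest minus retrospective pretest, with an external performance criterion for each design condition. A minimal sketch follows; the data, effect size, and sample size are simulated assumptions, not the study's.)

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 50  # hypothetical respondents in one design condition

# Simulated external performance criterion and self-report items
performance = rng.normal(0, 1, n)
retro_pre = rng.normal(0, 1, n)
post = retro_pre + 0.6 * performance + rng.normal(0, 1, n)

# Criterion validity of self-reported change for this design;
# in the study, this correlation would be compared across designs
self_report_change = post - retro_pre
r, p = pearsonr(self_report_change, performance)
print(f"criterion validity: r = {r:.2f} (p = {p:.3f})")
```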
Quantifying Impact and Outcome: The Importance of Measurement in Evaluation
Presenter(s):
Ann Doucette, George Washington University, doucette@gwu.edu
Abstract: To ignore the implications of measurement is tantamount to conceptualizing outcomes research as a house of cards, subject to the vagaries of measurement artifacts. This paper examines the measurement properties of the University of Rhode Island Change Assessment Scale (URICA), a scale addressing readiness to change behavior that is characterized by four discrete stages, ranging from resistance and ambivalence about engaging in treatment to behavioral change and strategies for maintaining change and treatment goals. The URICA assumes a unidimensional construct in which individuals move back and forth across the four stages in an ordered fashion. Item response theory (IRT) models will be used to examine the measurement properties of the URICA, using sample data from Project MATCH, a multi-site clinical trial examining patient/client-treatment interactions. The dimensionality of the URICA and its precision in assessing 'readiness to change' will be examined using both conventional factor analysis and bi-factor models. The assumptions of the measurement model will be tested, and the implications of model degradation will be discussed.
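(Editorial illustration: the paper's IRT and bi-factor analyses require specialized modeling software, but the underlying dimensionality question can be sketched with a simple eigenvalue check on the inter-item correlation matrix, where a dominant first eigenvalue is consistent with a unidimensional construct. Everything below is simulated and purely illustrative; it is not the URICA data or the paper's method.)

```python
import numpy as np

rng = np.random.default_rng(1)
n_respondents, n_items = 300, 8

# Simulate Likert-type items driven by one latent trait,
# mimicking the unidimensionality the URICA assumes
theta = rng.normal(0, 1, n_respondents)
loadings = rng.uniform(0.5, 0.9, n_items)
noise = rng.normal(0, 1, (n_respondents, n_items))
items = np.clip(np.round(2.5 + theta[:, None] * loadings + noise), 1, 5)

# Eigenvalues of the inter-item correlation matrix, largest first;
# a large first-to-second ratio suggests a single dominant factor
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print(f"eigenvalue ratio (1st/2nd): {eigvals[0] / eigvals[1]:.1f}")
```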
(Re)Defining Validity in Effectiveness Evaluation
Presenter(s):
Tanner LeBaron Wallace, University of Pittsburgh, twallace@pitt.edu
Abstract: This paper argues for a redefinition of validity in effectiveness evaluation. Integrating traditional validity typologies associated with experimentally designed evaluations, validity theory derived from the discipline of psychometrics, and validity theory emerging from within the community of evaluation scholars, the paper advances a definition that conceptualizes validity as an argument about research procedures, specifically their ability to create 'ideal epistemic conditions' for investigating effectiveness, within a framework that considers the social consequences of effectiveness evaluation. First, the evolution of validity theory within psychometrics and evaluation is discussed. Next, three facets of a comprehensive validity argument are detailed: (1) the utility and relevance of the focus, (2) how diverse values are incorporated into and represented throughout the evaluation process, and (3) the ability of the evaluation to support the move from low-level to high-level generalizations. The paper ends with a discussion of the design and methods implications of this redefinition of validity.
