Session Title: Making Sense of Measures

Panel Session 941 to be held in Santa Monica on Saturday, Nov 5, 12:35 PM to 2:05 PM

Sponsored by the Quantitative Methods: Theory and Design TIG

Chair(s): Mende Davis, University of Arizona, mfd@u.arizona.edu

Abstract:
In every evaluation, there is an overabundance of measures to choose from. Which instrument should we choose? What can we use to guide our decisions? Evaluators often select instruments based on their previous experience, ready access to an instrument, its cost, whether the evaluation staff can administer the measure, and the time and effort required to collect the data. Every evaluation has a budget in time, money, and respondent burden, and measures must fit within it. When inundated with measures, evaluators find it hard to see the differences among them. We suggest that a taxonomy can be used to categorize social science methods and guide a more effective selection of study measures. Some methods require considerable data collection effort; others are simple and inexpensive. Low-cost alternatives are often overlooked. In this panel, we will provide an overview of method characteristics and how to take advantage of them in evaluation designs.

Methods!

Mende Davis, University of Arizona, mfd@u.arizona.edu

The first presentation will provide an overview of a taxonomic approach to methods and illustrate how it can be used to organize potential evaluation measures for health outcomes. In the process, we will examine the overlap among methods and identify data collection methods that may show promise for further research.

Who? Me?

Sally Olderbak, University of Arizona, sallyo@email.arizona.edu
Michael Menke, University of Arizona, menke@email.arizona.edu

Self-report is the primary workhorse of evaluation research. Ratings by others are harder to obtain but may be a better choice, particularly when self-rating is impossible. We will review the existing literature on self-ratings and ratings by others, with a focus on their validity and reliability. We will present a novel empirical example of peer rating. This panel presentation will also include sources of no-cost instruments with documented reliability and validity that can be used in evaluation research.

How Many Items?

Sally Olderbak, University of Arizona, sallyo@email.arizona.edu

This presentation will focus on the literature regarding single-item versus multiple-item instruments and discuss the advantages and disadvantages of increasing the number of items in terms of reliability and validity. We will illustrate this talk with the development of the Arizona Life History Battery (ALHB) and its short form, the 20-item Mini-K. The ALHB is a 199-item battery of cognitive and behavioral indicators of life history strategy compiled and adapted from various original sources. The Mini-K Short Form (Figueredo et al., 2006) may be used separately, in place of the entire ALHB, to reduce research participant response burden.
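
As background for this trade-off (a standard psychometric result, not specific to the ALHB): the Spearman-Brown prophecy formula describes how composite reliability grows with the number of parallel items. If a single item has reliability $\rho$, a test of $k$ parallel items has reliability

$$\rho_k = \frac{k\rho}{1 + (k - 1)\rho}$$

For example, parallel items with $\rho = 0.30$ yield a composite reliability of about 0.90 at $k = 20$, with sharply diminishing returns thereafter, which is one reason a 20-item short form can remain serviceable relative to a 199-item battery.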

More Methods!

Michael Menke, University of Arizona, menke@email.arizona.edu
Sally Olderbak, University of Arizona, sallyo@email.arizona.edu
Mende Davis, University of Arizona, mfd@u.arizona.edu

The final presentation will demonstrate how a method taxonomy can be used to organize social science outcome measures for Alzheimer's dementia. Instruments with different names are frequently assumed to be different methods; however, this is often not the case. Two multi-item paper-and-pencil tests may share all of the same method biases, and if an evaluation relies on measures that are similar in nearly every way, the study results can be biased. When evaluators have the opportunity to include multiple measures, it is important to make sure that the instruments are actually different. Evaluators constantly deal with cost limitations in all phases of evaluation research. We suggest that evaluators think in terms of 'item' budgets as well.
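
To make the idea of a method taxonomy concrete, here is a minimal, hypothetical sketch in Python; the facets (source, response format, administration) and the instrument names are our illustrative assumptions, not the panel's actual taxonomy.

# Hypothetical sketch: tag each instrument with its method facets so that
# nominally different instruments sharing the same method are easy to spot.
# Facet names and instruments are illustrative assumptions, not the panel's taxonomy.
from dataclasses import dataclass

@dataclass(frozen=True)
class Method:
    source: str           # who supplies the data: "self", "peer", "clinician", "records"
    response_format: str  # e.g., "multi-item paper-and-pencil", "interview"
    administration: str   # e.g., "in person", "mail", "phone"

instruments = {
    "Scale A":     Method("self", "multi-item paper-and-pencil", "in person"),
    "Scale B":     Method("self", "multi-item paper-and-pencil", "in person"),
    "Peer rating": Method("peer", "single-item rating", "mail"),
}

def same_method(a: str, b: str) -> bool:
    """True when two instruments share every method facet, so adding the
    second buys little protection against shared method bias."""
    return instruments[a] == instruments[b]

print(same_method("Scale A", "Scale B"))      # True: different names, same method
print(same_method("Scale A", "Peer rating"))  # False: a genuinely different method

Under this kind of scheme, an 'item budget' would be spent preferentially on instruments that differ on at least one method facet.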