
Contact emails are provided for one-to-one contact only and may not be used for mass emailing or group solicitations.

Session Title: Developing Effective Surveys
Multipaper Session 136 to be held in Wekiwa 8 on Wednesday, Nov 11, 4:30 PM to 6:00 PM
Sponsored by the AEA Conference Committee
Chair(s):
Lija Greenseid, Professional Data Analysts Inc, lija@pdastats.com
Using Context to Evaluate Survey Items
Presenter(s):
Michael Burke, RTI International, mburke@rti.org
Abstract: Question reliability is an often overlooked issue: unreliable items can attenuate results or, even worse, produce incorrect assessments of relationships between variables. Pretesting, cognitive testing, and disaster checks are all names for the approach suggested here, but the approach is unique in that it specifically calls for pretest participants to draw on their own experiences and apply context to the items or issues at hand. The techniques that will be taught emphasize evaluator interpretation and expertise over subject/participant knowledge and forthrightness. As such, the session teaches evaluators how to collect useful information without advance or complete knowledge of how clients interpret or understand survey items or communication messages. Greater use of such methods will, it is hoped, reduce unreliability and increase the quality of evaluations conducted.
Addressing Context by Mixing Methods in Survey Development
Presenter(s):
Katherine Ryan, University of Illinois at Urbana-Champaign, k-ryan6@illinois.edu
Nora Gannon, University of Illinois at Urbana-Champaign, ngannon2@illinois.edu
Abstract: Attention to context in warranting inferences from data has gained increasing importance across all domains of evaluation (e.g., education, health care, environmental studies) (Julnes & Rog, 2008). One result of this trend is a stronger need for robust methods to produce quality instruments for use in evaluation (Desimone & Le Floch, 2008). This paper explores the use of a mixed methods sequential design for questionnaire development in a large-scale evaluation. In preparing for the pilot study, a sequential design was tailored to capture multiple perspectives from the diverse participants intended to complete the questionnaire and from individuals with expert knowledge of the questionnaire's aims. Including these perspectives permits revisions of the questionnaire that increase consistency of interpretation across participants (e.g., teachers vs. principals) and across contexts (e.g., low- vs. high-achieving schools). This design is expected to increase the quality of the data and strengthen the inferences warranted from those data for use by decision makers in developing policies.
Stuck in the Middle: The Use and Interpretation of Mid-Points in Surveys
Presenter(s):
Joel Nadler, Southern Illinois University at Carbondale, jnadler@siu.edu
Rebecca Weston, Southern Illinois University at Carbondale, weston@siu.edu
Abstract: Likert-type scales are common in survey research. Research suggests that Likert-type scales should use four to seven response options; whether to include a mid-point, however, appears to be more a matter of taste. Researchers can use an even number of response options to force a choice or an odd number to allow neutrality. The authors conducted a study comparing different response options on the same set of 28 attitudinal questions. Participants answered the questions using one of the following: a 4-point scale (forced choice), a 5-point scale (where 3 represented 'Neither'), or a 4-point scale with a 'No Opinion' option placed after the scale. Results indicated that 25% of item means were significantly affected by the response options used. Additionally, 'Neither' was chosen significantly more often than 'No Opinion' on 80% of the items. Implications of this study for response choices in evaluation will be discussed.
The Current Debates About Impact Evaluation Using Randomization: A Political and a Scientific Perspective
Presenter(s):
Rahel Kahlert, University of Texas at Austin, kahlert@mail.utexas.edu
Abstract: This paper presentation analyzes the current debates about impact evaluation using randomization. The controversial issue remains whether the randomized controlled trial (RCT) represents the 'gold standard' among evaluation approaches, regardless of the context of an evaluation. The author comparatively analyzes the randomization debate in U.S. education and in international development since the turn of the twentieth century. The presenter employs both a political and a scientific perspective (cf. Weiss, 1972; Vedung, 1998) to explain the rationales behind the promotion strategies of the respective sides of the debate. The presentation discusses the strengths and limitations of randomized evaluation approaches for social and educational interventions, as put forward by both promoters and skeptics of randomized experiments. The author analyzes the arguments advanced by both sides and proposes ways in which the debate could be mediated.
