|
Using the Pre-test/Post-test Only Design for Evaluation of Training
|
| Presenter(s):
|
| Jack McKillip,
Southern Illinois University, Carbondale,
mckillip@siu.edu
|
| Joan Rycraft,
University of Texas, Arlington,
rycraft@uta.edu
|
| Steven Wernet,
Saint Louis University,
spwernet@netzero.net
|
| Michael Patchner,
Indiana University Purdue University Indianapolis,
patchner@iupui.edu
|
| Edmund Mech,
Indiana University Purdue University Indianapolis,
mechresearch@qwest.net
|
| Abstract:
The pre-test/post-test only design was used to evaluate 501 training sessions on Adoption Awareness, delivered to 6,579 health and pregnancy counseling professionals by the National Center For Adoption using a standardized curriculum. Secondary analyses indicated that training effects were very large on measures of knowledge, confidence, and self-rated skills (ds between 1.35 and 2.68). Training effects were larger for 3-day than for 1-day training. Other between-session variance in training effects was very small (<5%, estimated with HLM). Extended post-tests indicated that effects were lasting. Use of two pre-tests or no pre-test showed neither practice effects nor pre-test sensitization, although potential self-presentation effects were seen on the day of training. Strengths and flexibility of this design are discussed.
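As a point of reference for the effect sizes reported above, a standardized mean difference for a pre-test/post-test only design is commonly computed as a Cohen's d of the following form (a minimal sketch; the abstract does not state which estimator the secondary analyses actually used):

\[ d \;=\; \frac{\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} \;=\; \sqrt{\tfrac{1}{2}\bigl(s_{\text{pre}}^{2} + s_{\text{post}}^{2}\bigr)} \]

Under this convention, a d of 1.35 means the average post-test score sits 1.35 pooled standard deviations above the average pre-test score.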
|
|
Comparison of Variations in Retrospective Pre-test (RPT) and Pre-test/Post-test Surveys Measuring the Outcomes of an Anti-violence Education Program
|
| Presenter(s):
|
| James Riedel,
Girl Scout Research Institute,
jriedel@girlscouts.org
|
| Abstract:
This study's purposes are to test the reliability of the retrospective pre-test (RPT) survey design, both between and within subjects, and to measure outcomes of a violence prevention program. Project Anti-Violence Education teaches girls skills and strategies that reduce their chances of becoming a perpetrator and/or victim of violence through bullying prevention/intervention, gang prevention, crime prevention, and internet safety curricula.
Participants are randomly assigned to six measurement conditions: variations on the ordering of the question types in the RPT and pre/post surveys (i.e., Before; After; Compared to now, before the program I . . .; and Compared to before the program, now I . . .).
The instruments were designed to measure constructs including conflict resolution, personal safety, healthy relationships, and decision-making. Data analyses also examine the reliability of the RPT compared to the pre/post-test while controlling for the confound of pre-testing. Additionally, the effect of item ordering and the validity of comparative post-testing are assessed.
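To illustrate the kind of randomization described above, balanced random assignment to six measurement conditions can be sketched as follows (a hypothetical Python illustration; the condition labels, seed, and participant identifiers are placeholders, not taken from the study):

import random

# Illustrative labels for the six measurement conditions (orderings of the
# question types in the RPT and pre/post surveys); the real labels differ.
CONDITIONS = ["order_1", "order_2", "order_3", "order_4", "order_5", "order_6"]

def assign_conditions(participant_ids, seed=2024):
    """Shuffle participants, then deal them out round-robin so that each
    of the six conditions receives an (almost) equal number of participants."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

# Example usage with hypothetical participant IDs 1..60:
assignment = assign_conditions(range(1, 61))

This is only a sketch of simple balanced randomization; the study's actual assignment procedure may differ.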
|
|
Using Randomized Control Trials to Learn What Works in Prevention
|
| Presenter(s):
|
| James Derzon,
Pacific Institute for Research and Evaluation,
jderzon@verizon.net
|
| Abstract:
In an ideal world, or the idealized world of the analog study, the Randomized Control Trial (RCT) is an elegant and irrefutable design for drawing causal inferences. However, when applied in real-world research, the approach has proven a slow, conservative, and limited path to learning what works to prevent many problem behaviors. RCTs are expensive and limit knowledge generation to a handful of scientists. They focus attention on (a) units that can be randomized, (b) subject (instead of intervention) characteristics, and (c) a single outcome of (legitimate) concern: effectiveness. The value of RCTs is evaluated based on the internal consistency of the design and estimates of statistical significance, at the expense of generalizability and potential population impact. These and other implications of faith in the RCT will be discussed using evidence from meta-analyses and systematic reviews, and an alternative, real-world approach to learning what works will be presented.
|
|
They May Glitter, but Are They Gold? Randomized Control Trials in Evaluation
|
| Presenter(s):
|
| Sheila Arens,
Mid-continent Research for Education and Learning,
sarens@mcrel.org
|
| Andrea Beesley,
Mid-continent Research for Education and Learning,
abeesley@mcrel.org
|
| Abstract:
Regardless of one's stance on how research ought to be conducted, evaluators working in organizations sometimes do not have the luxury of selecting what they might consider the most appropriate evaluation design. In this session, presenters will share experiences conducting randomized control trials (RCTs) and discuss challenges confronted when developing proposals for clients committed to utilizing this methodology. Methodological decisions affect every aspect of the evaluation. Presenters will describe ways to navigate methodological conversations with clients, including whether RCTs are reasonable given the program's age, fiduciary constraints, recruiting, and analysis and reporting. In addition, presenters will address what stands to be lost if evaluators fail to engage in such conversations.
|