Maximizing Follow-up Completion Rates in the Face of Real-world Constraints: Successes from a Tobacco Cessation Program Evaluation Project
Presenter(s):

Kay Calendine, University of Arizona, kcalendi@u.arizona.edu
Sue Voelker, University of Arizona, smlarsen@u.arizona.edu
John Daws, University of Arizona, johndaws@email.arizona.edu
Abstract:
Programs whose evaluation components require the collection of participant follow-up contact data frequently face limited resources, short time frames, and other real-world constraints that hinder their ability to achieve contact completion rates high enough to produce useful program evaluations.
To overcome these issues, the Arizona tobacco cessation program evaluation project identified several barriers on which to focus and developed strategies targeting them. One identified barrier was a callback pool too large to be managed effectively with the available resources. The proposed solutions were to decrease the size of the participant follow-up pool by sampling and to focus more effort on locating difficult-to-reach participants.
This paper presentation discusses the two main strategies implemented to address this barrier, the benefits these strategies produced, the project's subsequent success in increasing the completion rate from 40% to over 80%, and suggestions for incorporating these strategies into other follow-up data collection projects.
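As an illustration only, the sketch below shows one way the two strategies named in the abstract might be combined: drawing a manageable random sample from an oversized callback pool, then budgeting extra call attempts for participants flagged as difficult to reach. The data layout, field names (hard_to_reach, max_attempts), and attempt counts are assumptions for the sketch, not details from the Arizona project.

```python
# A minimal sketch, on made-up data, of sampling down a callback pool
# while weighting effort toward difficult-to-reach participants.
import random

def build_callback_sample(pool, sample_size, hard_to_reach_attempts=8,
                          standard_attempts=4, seed=42):
    """Sample the follow-up pool and attach a per-participant attempt
    budget, reserving more attempts for difficult-to-reach participants."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    sample = rng.sample(pool, min(sample_size, len(pool)))
    return [
        {**p,
         "max_attempts": hard_to_reach_attempts if p.get("hard_to_reach")
                         else standard_attempts}
        for p in sample
    ]

if __name__ == "__main__":
    # Hypothetical pool: every fifth participant is hard to reach.
    pool = [{"id": i, "hard_to_reach": i % 5 == 0} for i in range(1000)]
    callbacks = build_callback_sample(pool, sample_size=300)
    print(len(callbacks), "participants queued;",
          sum(p["max_attempts"] for p in callbacks), "attempts budgeted")
```

Under a scheme like this, interviewer effort scales with the sample size rather than the full pool, which is the point of the first strategy.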
Developing Effective, Non-Depressing Pre-Tests
Presenter(s):

Linda Heath, Loyola University Chicago, lheath@luc.edu
Jonya Leverett, Loyola University Chicago, jlevere@luc.edu
David Slavsky, Loyola University Chicago, dslavsk@luc.edu
Abstract:
Pre-test data are crucial for assessing program effectiveness, but the very act of administering a pre-test can adversely affect the integrity of the research design and the program itself. Most designs use the same or alternate forms of measures for the pre- and post-tests. The measure must be pitched at a high enough level to capture program gains, but administering such a high-level measure at the pre-test can demoralize the treatment group and drive away the comparison group. Valuable program time and resources must then be spent restoring a sense of efficacy to program participants and seeking post-test data from comparison group members. Grant budgets and schedules often preclude spending time developing less-threatening pre-tests with program participants. This research explores the effectiveness of developing Computerized Adaptive Tests (CATs) with Introductory Psychology students for ultimate use with public school teachers.
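For readers unfamiliar with the mechanics behind a CAT, the sketch below shows the core adaptive loop under an assumed one-parameter (Rasch) item response model: after each response, the ability estimate is updated and the next item is chosen to match it. The item bank, the stepwise ability update, and the fixed test length are simplifying assumptions for illustration, not the instrument described in the abstract.

```python
# A minimal sketch of the adaptive loop at the heart of a Computerized
# Adaptive Test, assuming a one-parameter (Rasch) item response model.
import math
import random

def p_correct(theta, b):
    """Rasch model: probability that an examinee of ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, unused):
    """Choose the unused item whose difficulty is closest to the current
    ability estimate (this maximizes information under the Rasch model)."""
    return min(unused, key=lambda b: abs(b - theta))

def update_theta(theta, correct, step=0.5):
    """Crude stepwise update: nudge the estimate up after a correct
    response, down after an incorrect one. A production CAT would use
    maximum-likelihood or Bayesian (EAP) estimation instead."""
    return theta + step if correct else theta - step

def run_cat(item_bank, n_items=10, true_theta=0.8):
    theta = 0.0                 # start at the assumed population mean
    unused = list(item_bank)
    for _ in range(n_items):
        b = next_item(theta, unused)
        unused.remove(b)
        # Simulate a response; in practice this comes from the examinee.
        correct = random.random() < p_correct(true_theta, b)
        theta = update_theta(theta, correct)
    return theta

if __name__ == "__main__":
    bank = [round(-2 + 0.25 * i, 2) for i in range(17)]  # difficulties -2..2
    print("Final ability estimate:", run_cat(bank))
```

Because each examinee only ever sees items near their own ability level, a CAT pre-test avoids confronting participants with a long run of items far above their reach, which is the demoralization problem the abstract describes.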