|
Maximum Individualized Change Analysis: Evidence Supporting its Use
|
| Presenter(s):
|
| Eric Brown, University of Washington, ricbrown@u.washington.edu
|
| Roger Boothroyd, University of South Florida, boothroy@fmhi.usf.edu
|
| Abstract:
At the 2007 American Evaluation Association Conference (Boothroyd, Banks, & Brown, 2007), we described an analytic procedure, the maximum individualized change score method (Boothroyd, Banks, Evans, Greenbaum, & Brown, 2004), which we argued is potentially superior to traditional MANOVA approaches for program comparisons in which there is substantial heterogeneity among the clients served, the services they receive, and the outcomes they attain. We have since received funding from the National Institute of Mental Health (R03MH082445-01) to systematically assess the merits of this approach under various 'real world' data assumptions. The presentation will describe the analytic approach and summarize findings from a series of simulation studies, as well as from the application of the method in a secondary analysis of data from a multi-site, federally funded study examining the impact of mental health managed care.
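
The abstract does not spell out how the maximum individualized change score is computed, so the following Python sketch is only a hypothetical illustration: it assumes each client's score is the largest standardized pre-post change across a set of outcome measures, and that scores are then compared across programs with a simple two-sample test. The simulation setup, variable names, and effect sizes are all invented for illustration; the method itself is described in Boothroyd, Banks, Evans, Greenbaum, & Brown (2004).

# Hypothetical sketch only; not the authors' actual implementation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2009)
n, k = 100, 4  # clients per program, number of outcome measures

def simulate_program(effect):
    """Pre/post scores on k outcomes; each client improves on only one,
    randomly chosen, outcome -- a crude stand-in for client heterogeneity."""
    pre = rng.normal(50, 10, size=(n, k))
    post = pre + rng.normal(0, 5, size=(n, k))
    post[np.arange(n), rng.integers(0, k, size=n)] += effect
    return post - pre  # change scores

change_a = simulate_program(effect=8.0)  # program A
change_b = simulate_program(effect=0.0)  # program B

# Standardize each outcome's change score against the pooled sample,
# then take each client's maximum standardized change.
pooled = np.vstack([change_a, change_b])
mu, sd = pooled.mean(axis=0), pooled.std(axis=0, ddof=1)
max_a = ((change_a - mu) / sd).max(axis=1)
max_b = ((change_b - mu) / sd).max(axis=1)

t, p = stats.ttest_ind(max_a, max_b)
print(f"maximum individualized change comparison: t = {t:.2f}, p = {p:.4f}")

In a fuller comparison of the kind the abstract describes, the same simulated data would also be analyzed with a MANOVA on the raw change scores, to see which approach better detects improvement that is spread across different outcomes for different clients.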
|
|
Examining the Context and Determining Evaluation Questions in Alcohol and Drug Prevention Programs
|
| Presenter(s):
|
| Robert LaChausse, California State University San Bernardino, rlachaus@csusb.edu
|
| Abstract:
Evaluators have long been encouraged to involve stakeholders in program evaluation activities to increase evaluation 'buy-in' and the subsequent use of evaluation information. Examining the context in which a program operates can shape the evaluation questions asked and the methods used. Many approaches to program evaluation emphasize the importance of examining context but fail to articulate how this should be done. Stakeholder interviews and checklists can help evaluators understand program context and improve how evaluations are planned and conducted. An innovative approach to examining context and selecting evaluation questions will be presented. This paper will strengthen evaluators' competency in examining the context of alcohol and drug prevention programs and in determining evaluation questions and methods, while fostering evaluation utilization. An example from a drug prevention program serving an ethnically diverse population will be used to illustrate these concepts and lessons learned.
|
|
Coping With the Quasi in Your Quasi-experimental Evaluation: Lessons Learned From a Mental Health Program Evaluation With Consumers and Case Managers
|
| Presenter(s):
|
| Lara Belliston, Ohio Department of Mental Health, bellistonl@mh.state.oh.us
|
| Susan Missler, Ohio Department of Mental Health, misslers@mh.state.oh.us
|
| Abstract:
In an effort to make mental health care consumer- and family-driven, Ohio is evaluating a program for mental health consumers and case managers that uses outcomes feedback to foster more person-centered, collaborative, empowering, and recovery-oriented treatment planning and case management. The evaluation is funded through the SAMHSA Mental Health Transformation State Incentive Grant (MH-TSIG). The quasi-experimental, wait-list control design included case managers and consumers in four mental health agencies. As is common in real-world research, quasi-experimental studies often face challenges with recruitment, attrition (mortality), and the integration of archival data. Results from the evaluation will be presented, showing how statistical analyses can be used to adjust for these threats to validity.
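
The abstract does not name the specific statistical adjustments used. As one common example, a sketch of an ANCOVA-style adjustment for baseline nonequivalence and differential attrition in a wait-list design might look like the following; all variable names and data here are invented for illustration.

# Generic ANCOVA-style adjustment sketch; not the analyses actually used in
# the Ohio MH-TSIG evaluation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300
baseline = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)                     # 0 = wait-list, 1 = program
followup = baseline + 3 * group + rng.normal(0, 8, n)

# Simulate differential attrition: program participants with lower baseline
# functioning are more likely to be missing at follow-up.
drop_prob = 0.30 * group * (baseline < 50)
observed = rng.random(n) > drop_prob
df = pd.DataFrame({"followup": followup, "group": group,
                   "baseline": baseline})[observed]

# Regressing follow-up scores on group while controlling for baseline yields a
# group effect adjusted for measured pre-existing differences.
model = smf.ols("followup ~ group + baseline", data=df).fit()
print(model.params["group"], model.pvalues["group"])

The same logic extends to additional measured covariates (e.g., diagnosis or service use drawn from archival records), which is one way archival data can help address selection and attrition threats.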
|
|
Longitudinal Examination of Facilitator Implementation: A Case Study Across Multiple Cohorts of Delivery
|
| Presenter(s):
|
| Cady Berkel, Arizona State University, cady.berkel@asu.edu
|
| Melissa Hagan, Arizona State University, melissa.hagan@asu.edu
|
| Sharlene Wolchik, Arizona State University, wolchik@asu.edu
|
| Tim Ayers, Arizona State University, tim.ayers@asu.edu
|
| Sarah Jones, Arizona State University, sarahjp@asu.edu
|
| Irwin Sandler, Arizona State University, irwin.sandler@asu.edu
|
| Abstract:
Evaluation studies of evidence-based programs rarely include the implementation information needed to draw valid conclusions about program outcomes. It is often assumed that facilitators fall victim to 'program drift,' showing lower fidelity and greater adaptation over time, and that program effects weaken as a result (Kerr et al., 1985). The Concerns-Based Adoption Model (CBAM) has been used as a framework for understanding facilitator implementation over time (Ringwalt et al., under review). The framework predicts that, with repeated delivery, facilitators' implementation becomes more fluid and responsive to participants' needs. We present results of an observational study of one facilitator's implementation across five waves of the Family Bereavement Program (FBP). Fidelity and adaptations will be coded by two coders. Based on the CBAM framework, the authors hypothesize that fidelity and responsive adaptations will increase as the facilitator becomes more familiar with the program content.
|