Session Title: Methods in Evaluation
Multipaper Session 607 to be held in Mencken Room on Friday, November 9, 1:55 PM to 3:25 PM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Elizabeth Sale,  Missouri Institute of Mental Health,  liz.sale@mimh.edu
Comparing the Use of Standardized and Site-specific Instrumentation in National and Statewide Multi-site Evaluations
Presenter(s):
Elizabeth Sale,  Missouri Institute of Mental Health,  liz.sale@mimh.edu
Mary Nistler,  Learning Point Associates,  mary.nistler@learningpt.org
Carol Evans,  Missouri Institute of Mental Health,  carol.evans@mimh.edu
Abstract: Choosing instrumentation in the evaluation of multi-site programs can be challenging. While cross-site evaluators may opt to use a standardized instrument across all sites, local programs may not be amenable to adopting cross-site instruments for a variety of reasons. First, outcomes measured by cross-site evaluators may simply not be of interest to local programs. Second, cross-site instrumentation may not be culturally appropriate or age-specific for a given site. Third, because cross-site evaluations using standardized instruments adopt a “one size fits all” mentality, they may fail to capture changes in individuals that could be captured using site-specific instruments. We compare instrumentation decisions in three multi-site studies (an early childhood program, a mentoring program, and a suicide prevention program) using standardized and program-specific instrumentation, and their impact on both cross-site and local program evaluations. Implications for instrument selection in future multi-site evaluations are discussed.
Analysis of Nested Cross-sectional Group-Randomized Trials With Pretest and Posttest Measurements: A Comparison of Two Approaches
Presenter(s):
Sherri Pals,  Centers for Disease Control and Prevention,  sfv3@cdc.gov
Sheana Bull,  University of Colorado, Denver,  sheana.bull@uchsc.edu
Abstract: Evaluation of community-level HIV/STD interventions is often accomplished using a group-randomized trial, or GRT. A common GRT design is the pretest-posttest nested cross-sectional design. Two analytic strategies for this design are the mixed-model repeated measures analysis of variance (RMANOVA) and the mixed-model analysis of covariance (ANCOVA), with pretest group means as a covariate. We used data from the POWER (Prevention Options for Women Equal Rights) study to demonstrate power analysis and compare models for two variables: any unprotected sex in the last 90 days and condom use at last sex. For any unprotected sex, the RMANOVA approach was more powerful, but the ANCOVA approach was more powerful for the analysis of condom use at last sex. The difference in power between these models depends on the over-time correlation at the group level. Investigators designing GRTs should do an a priori comparison of models to plan the most powerful analytic approach.
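To make the contrast between the two analytic strategies concrete, the following is a minimal sketch, not the authors' code: it assumes Python with pandas and statsmodels, uses simulated data with a continuous stand-in for the binary outcomes described above, and all variable names and parameter values are illustrative.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a nested cross-sectional pretest-posttest GRT: groups are
# randomized to condition, and different individuals are sampled within
# each group at pretest (time 0) and posttest (time 1).
rng = np.random.default_rng(0)
n_groups, n_per_cell = 16, 50
rows = []
for g in range(n_groups):
    cond = g % 2                    # half the groups receive the intervention
    g_eff = rng.normal(0, 0.3)      # group-level random intercept
    for t in (0, 1):
        gt_eff = rng.normal(0, 0.2) # group-by-time random effect
        for _ in range(n_per_cell):
            y = 0.5 + g_eff + gt_eff + 0.4 * cond * t + rng.normal(0, 1.0)
            rows.append({"group": g, "cond": cond, "time": t, "y": y})
df = pd.DataFrame(rows)

# 1) Mixed-model RMANOVA: the condition-by-time interaction estimates the
#    intervention effect; re_formula adds a random group-by-time component.
rmanova = smf.mixedlm("y ~ cond * time", df, groups=df["group"],
                      re_formula="~time").fit()
print("RMANOVA effect:", rmanova.params["cond:time"])

# 2) Mixed-model ANCOVA: posttest individuals only, with the pretest GROUP
#    mean as the covariate (individual pretest scores do not exist here,
#    because different members are measured at each wave).
pre_means = (df[df.time == 0].groupby("group")["y"]
             .mean().rename("pre_mean").reset_index())
post = df[df.time == 1].merge(pre_means, on="group")
ancova = smf.mixedlm("y ~ cond + pre_mean", post, groups=post["group"]).fit()
print("ANCOVA effect:", ancova.params["cond"])

The design point the sketch mirrors is the one the abstract turns on: because members differ across waves, the ANCOVA covariate must be the pretest group mean, and the relative power of the two models shifts with the group-level over-time correlation.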
Closing the Gap on Access and Integration: An Evaluation of Primary and Behavioral Health Care Integration in Twenty-four States
Presenter(s):
Elena Vinogradova,  REDA International Inc,  evinogradova@redainternational.com
Elham Eid Alldredge,  REDA International Inc,  alldredge@redainternational.com
Abstract: Four “Closing the Gap on Access and Integration: Primary and Behavioral Health Care” Summits were conducted in 2004 by the Health Resources and Services Administration in collaboration with SAMHSA. During these facilitated meetings, state teams developed state-specific strategic action plans aimed at integrating mental health, substance abuse, and primary care services. Over the following two years, REDA International, Inc. conducted a comprehensive evaluation of the summits' impact, drawing on multiple sources of data and a groundbreaking comparative multiple case study methodology. The data analysis revealed that the extent of the summits' impact on states' efforts to integrate primary and behavioral health care was largely determined by a few critical factors that need to be better understood in future federal efforts to promote state-level change.
System-level Evaluation: Strategies for Understanding Which Part of the Elephant We Are Touching
Presenter(s):
Mary Armstrong,  University of South Florida,  armstron@fmhi.usf.edu
Karen Blase,  University of South Florida,  kblase@fmhi.usf.edu
Frances Wallace,  University of South Florida,  fwallace@fmhi.usf.edu
Abstract: Patton (2002) compares evaluations of complex adaptive systems to nine blind people, each of whom touches a different part of an elephant and thereby arrives at a different understanding of it. He points out that, from a systems perspective, to truly understand the elephant one must see it in its natural ecosystem. This paper uses a state-level evaluation of New Jersey's children's behavioral health system, conducted in 2006, to illustrate the challenges and solutions confronting system-level evaluators. Solutions utilized by the study team include a participatory action framework; a multi-method approach to data collection encompassing key stakeholder interviews, document reviews, web-based surveys, focus groups, analysis of administrative datasets to understand penetration rates, geographic equity, and service utilization, and interviews with caregivers and their case managers; and an interactive hermeneutic approach to data analysis and interpretation. The paper concludes with a set of challenges for future system-level evaluators.