Evaluation 2008



Session Title: Multisite Evaluations: Challenges, Methods, and Approaches in Public Health
Panel Session 817 to be held in the Agate Room Section B on Saturday, Nov 8, 8:00 AM to 9:30 AM
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
Chair(s):
Thomas Chapel,  Centers for Disease Control and Prevention,  tchapel@cdc.gov
Abstract: Consensus among key players regarding a program and its components is optimal for any evaluation, but it is particularly difficult to achieve in multisite evaluations. In federal and state programs implemented by networks of grantees and frontline practitioners, evaluation is formidable because evaluation skills and the availability of data sources vary from site to site. More importantly, beyond agreement on a program's high-level purpose, frontline activities in multisite programs can differ widely. Representatives from the three programs discussed on this panel have faced the challenges of monitoring performance at multiple sites, and some have had to use site data to illustrate overall program performance nationally. Program representatives will discuss their programs, the involvement of their grantees and partners in developing evaluation approaches, and the approaches taken. They will also describe the process of developing and implementing each evaluation, including decisions about where to impose uniformity and where to grant autonomy in indicators and data collection. Transferable lessons from their experience will be identified.
Advantages and Challenges of a Multi-Site, Multiple-Method Evaluation: Centers for Disease Control and Prevention's (CDC's) Colorectal Cancer Screening Demonstration Program
Amy DeGroff,  Centers for Disease Control and Prevention,  adegroff@cdc.gov
Laura Seeff,  Centers for Disease Control and Prevention,  lseff@cdc.gov
Florence Tangka,  Centers for Disease Control and Prevention,  ftangka@cdc.gov
Blythe Ryerson,  Centers for Disease Control and Prevention,  aryerson@cdc.gov
Janet Royalty,  Centers for Disease Control and Prevention,  jroyalty@cdc.gov
Jennifer Boehm,  Centers for Disease Control and Prevention,  jboehm@cdc.gov
Rebecca Glover-Kudon,  University of Georgia,  rebglover@yahoo.com
Judith Preissle,  University of Georgia,  jude@uga.edu
In 2005, CDC funded the Colorectal Cancer Screening Demonstration Program for three years to assess the feasibility of providing community-based colorectal cancer screening for low-income populations. An evaluation is being conducted across the five sites, focusing on implementation costs, processes, and screening outcomes. Evaluation methods address both the program and patient levels and include a cost assessment, a longitudinal case study, and analysis of patient-level data. The multidisciplinary evaluation team includes economists, epidemiologists, program evaluators, clinicians, data management specialists, and health educators. Evaluation results will have important policy implications. The evaluation methodology will be described, outlining the individual protocols for each of the three evaluation strategies and highlighting efforts to involve key stakeholders. The presenters will also discuss the challenges and limitations of evaluating a multi-site program in which each site implements a unique program model, including the use of different screening modalities.
Evaluating a Multi-Site HIV Testing Campaign: Addressing Real-World Challenges of Local Data Collection
Jami Fraze,  Centers for Disease Control and Prevention,  jfraze@cdc.gov
Jennifer Uhrig,  RTI International,  uhrig@rti.org
Kevin Davis,  RTI International,  kcdavis@rti.org
Doug Rupert,  RTI International,  drupert@rti.org
Ayanna Robinson,  Porter Novelli,  ayanna.robinson@porternovelli.com
Jennie Johnston,  Centers for Disease Control and Prevention,  jjohnston1@cdc.gov
Laura McElroy,  Centers for Disease Control and Prevention,  lmcelory@cdc.gov
The multi-faceted Take Charge. Take the Test. campaign encouraged at-risk African American women in two cities to get tested for HIV from October 2006 to October 2007. Evaluators were challenged to design an evaluation that would accurately capture campaign exposure and testing behaviors. The evaluation had to: 1) collect campaign data from multiple partners without overburdening them amidst database changes, 2) interpret preliminary findings to quickly redirect activities, and 3) survey the target audience with adequate power despite limited sample availability. The team designed a comprehensive, useful evaluation informed by a logic model, key stakeholders, evaluation experts, and published literature to: 1) obtain consistent monthly data on HIV tests, hotline calls, web hits, events, and partner outreach by working closely with partners and providing incentives, 2) provide preliminary results monthly to campaign implementers, and 3) administer an internet efficacy survey with a sample size adequate to accurately measure key outcomes.
Addressing the Challenges of Multi-Site Evaluation for the Georgia Family Connection Partnership
Adam Darnell,  Georgia State University,  darnelladam@hotmail.com
James Emshoff,  Georgia State University,  jemshoff@gsu.edu
Steve Erickson,  EMSTAR Research,  ericksoneval@att.net
The Georgia Family Connection initiative is a statewide network of community collaboratives that aims to improve health-, education-, and economic-related outcomes for Georgia's children and families. Evaluation efforts for Georgia Family Connection include three components: local evaluations undertaken by each collaborative, and sub-county and county-level evaluations conducted by the state evaluation team. We discuss practical challenges of multisite evaluation for each of these three components. For local evaluation, discussion will focus on the wide variation in methodological quality among evaluations conducted by the 159 collaboratives, each of which receives equal funding for evaluation. Sub-county and county-level evaluation efforts address the aggregate effects of multiple collaboratives; here our discussion will address the challenges of operationalizing variables, collecting data, and analyzing data that arise because each collaborative is largely free to address the unique conditions of its community as it sees fit. We also provide a brief report of evaluation findings.

