
Session Title: HIV, Tuberculosis and Pregnancy Prevention: Methods and Strategies for Evaluation
Multipaper Session 917 to be held in El Capitan A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Health Evaluation TIG
Chair(s):
Herb Baum, REDA International Inc, drherb@jhu.edu
Usability Testing of an Evidence-Based Teen Pregnancy/STI Prevention Program for American Indian/Alaska Native Youth: A Multi-Site Assessment
Presenter(s):
Ebun Odeneye, University of Texas, Houston, ebun.o.odeneye@uth.tmc.edu
Ross Shegog, University of Texas, Houston, ross.shegog@uth.tmc.edu
Christine Markham, University of Texas, Houston, christine.markham@uth.tmc.edu
Melissa Peskin, University of Texas, Houston, melissa.f.peskin@uth.tmc.edu
Stephanie Craig-Rushing, Northwest Portland Area Indian Health Board, scraig@npaihb.org
David Stephens, Northwest Portland Area Indian Health Board, dstephens@npaihb.org
Abstract: After a 15-year decline, the US national teen birth rate increased between 2005 and 2007; American Indian/Alaska Native (AI/AN) youth had the greatest increase (12%) of any ethnic group. AI/AN youth also experience significant STI/HIV disparities compared to other US teens. As part of the formative evaluation phase of a larger study, 'Innovative Approaches to Preventing Teen Pregnancy among American Indian and Alaska Native Youth', we will conduct intensive usability testing with AI/AN youth in three regions (n=90) to determine parameters for adapting an evidence-based program, 'It's Your Game, Keep It Real' (IYG), for this population. Data will be gathered on satisfaction with the user interface, ease of use, acceptability, credibility, motivational appeal, and applicability of IYG. Youth will evaluate each activity on these usability parameters, noting content, design, and thematic problems and ideas for improvement. Lessons learned from planning and implementing usability testing will be discussed.
A Comparison of Methods for Evaluating Implementation Fidelity of a Pregnancy and HIV Prevention Program
Presenter(s):
Pamela Drake, ETR Associates Inc, pamd@etr.org
Jill Glassman, ETR Associates Inc, jillg@etr.org
Lisa Unti, ETR Associates Inc, lisau@etr.org
Abstract: We used an RCT design to evaluate an online training program for over 200 educators implementing the Reducing the Risk program. Our primary outcome of interest was implementation fidelity. We used several methods to measure implementation fidelity: educator implementation logs for each lesson, interviews with educators after they implemented specific lessons, in-person observations, audio observations, follow-up interviews, and post surveys. We are analyzing and comparing all data sources to determine how valid the educator self-report logs are, which method(s) appear to provide the most valid data, how validity appears to vary across the data sources, and what information each data source adds to the measurement of implementation fidelity. The results will provide important information for improving the quality of the various types of implementation fidelity measures, along with recommendations for which sources yield the best quality data under various resource constraints.
Strategies for Evaluating Tuberculosis Control and Prevention Programs
Presenter(s):
Lakshmy Menon, Centers for Disease Control and Prevention, lmenon@cdc.gov
Awal Khan, Centers for Disease Control and Prevention, aek5@cdc.gov
Abstract: This paper will present various monitoring and evaluation strategies (of specific interventions and routine program activities) used by tuberculosis (TB) control and prevention programs in both national and international settings (low-burden and high-burden settings, non-governmental as well as governmental organizations). Case studies will illustrate types of evaluations (including cost-benefit analysis, feasibility and impact studies, and quality improvement); challenges and successes encountered during the implementation of each evaluation will also be presented. Time and resources are often limited when conducting an evaluation of an existing health program. This paper will highlight approaches that work, so that overburdened staff and evaluators can undertake a successful evaluation of their programs.
From Interviews to Implementation: Conducting Formative Evaluation to Implement an Access to Care Intervention
Presenter(s):
Sarah Chrestman, Louisiana Public Health Institute, schrestman@lphi.org
Michael Robinson, Louisiana Public Health Institute, mrobinson@lphi.org
Jack Carrel, Louisiana Office of Public Health, jack.carrel@la.gov
Susan Bergson, Louisiana Public Health Institute, sbergson@lphi.org
Snigdha Mukherjee, Louisiana Public Health Institute, smukherjee@lphi.org
Abstract: Louisiana Positive Charge is part of a multi-site national evaluation assessing linkage to medical care for HIV+ individuals. Formative evaluation was conducted to understand why some people enter and remain in care at the time of diagnosis while others do not. Key informant interviews were conducted with persons diagnosed with HIV at STD clinics to assess their diagnosis experience and linkage to care; the results were used to develop and implement the evaluation plan. Eleven of 12 respondents were satisfied with their testing experience, and all were linked to medical care. The majority reported their desire to live and be healthy as the reason they entered care. Suggested reasons for people not entering medical care were fear/denial, stigma, and transportation. Topics to be discussed include the results of the formative evaluation, stakeholder values, how the evaluation drives data collection and analysis, and how those values compel changes in methodology.
