Session Title: Methodological Choices in Assessing the Quality and Strength of Evidence on Effectiveness
Panel Session 228 to be held in Texas A on Thursday, Nov 11, 9:15 AM to 10:45 AM
Sponsored by the Government Evaluation TIG
Chair(s):
Valerie J Caracelli, United States Government Accountability Office, caracelliv@gao.gov
Abstract: This panel explores the methodological choices evaluators face in reviewing a body of evaluation evidence to learn “what works,” that is, which interventions or approaches are effective in achieving a given outcome. When asked to assess a new initiative to identify effective social interventions, GAO discovered that six federally supported efforts with the same basic purpose had been operating in diverse content areas for several years. While all seven evidence reviews (the new initiative and the six federal efforts) assessed evaluation quality against similar social science research standards, some included additional criteria or gave greater emphasis to particular issues. The reviews also differed markedly in their approaches to the next step: synthesizing credible evaluation evidence to draw conclusions about whether an intervention is effective. The panelists will examine the choices these efforts faced and the features of the evaluations or their contexts that influenced their decisions.
Comparing the Top Tier Evidence Approach to Other Systematic Evidence Reviews
Stephanie Shipman, United States Government Accountability Office, shipmans@gao.gov
Valerie J Caracelli, United States Government Accountability Office, caracelliv@gao.gov
This paper will introduce the issues by discussing federal policy interest in identifying high-quality evidence on effective interventions and briefly describing the Top Tier Evidence initiative, a systematic evidence review conducted by the private, nonprofit Coalition for Evidence-Based Policy, and the congressional request for GAO to assess the validity of that effort. We will then describe the scope of the six federally supported systematic evidence reviews we selected for comparison and the general steps they take: first, assessing the quality of evaluation evidence, and then synthesizing the credible evidence to draw conclusions about intervention effectiveness. We will point out areas of similarity and difference among their approaches to set up the other panelists’ in-depth discussions of the issues they considered and the rationales for their methodological and analytic choices.
Methodological Considerations in Selecting Programs for the Model Programs Guide
Marcia Cohen, Development Services Group Inc, mcohen@dsgonline.com
The Office of Juvenile Justice and Delinquency Prevention’s Model Programs Guide (MPG) is a searchable database that allows practitioners and researchers to find information and research on more than 200 evidence-based prevention and intervention programs. The MPG conducts reviews to identify effective programs on the topics of delinquency; aggression and violent behavior; children exposed to violence; gang involvement; alcohol, tobacco, and other drug use; academic problems; family functioning; sexual activity and exploitation; and mental health issues. This paper reviews the evidence requirements that programs must meet for inclusion in the MPG and discusses the review process as well as the four dimensions of the program review criteria: conceptual framework, program fidelity, design quality, and outcome evidence. Components of the rating instrument and rating system will also be discussed, and evaluation quality standards that researchers can use to assess the evidence on program effectiveness will be proposed.
National Registry of Evidence-based Programs and Practices Approach to Evaluating the Evidence
Kevin Hennessy, United States Department of Health and Human Services, kevin.hennessy@samhsa.hhs.gov
The Substance Abuse and Mental Health Services Administration (SAMHSA) within the U.S. Department of Health and Human Services developed the National Registry of Evidence-based Programs and Practices (NREPP – www.nrepp.samhsa.gov) as a searchable online tool to assist states and community-based organizations in identifying and assessing both the strength of evidence and the dissemination support for interventions and approaches to preventing and treating mental and/or substance use disorders. NREPP is one way SAMHSA works to improve access to information on tested interventions and thereby reduce the lag between the creation of scientific knowledge and its practical application in the field. With this in mind, the presentation will highlight NREPP’s approach to evaluating behavioral health interventions and discuss the methodological, practical, and political considerations and decisions behind SAMHSA’s choice of this approach.
The Evidence-based Practice Centers: A National Approach to Systematically Assessing Evidence of Effectiveness of Health Care Interventions
Jean Slutsky, United States Department of Health and Human Services, jean.slutsky@ahrq.hhs.gov
In 1997, the Agency for Health Care Policy and Research (AHCPR), now known as the Agency for Healthcare Research and Quality (AHRQ), launched its initiative to promote evidence-based practice in everyday care through the establishment of Evidence-based Practice Centers (EPCs). The EPCs develop evidence reports and technology assessments on topics relevant to clinical, social science/behavioral, economic, and other health care organization and delivery issues, specifically those that are common, expensive, and/or significant for the Medicare and Medicaid populations. With this program, AHRQ became a "science partner" with private and public organizations in their efforts to improve the quality, effectiveness, and appropriateness of health care by synthesizing the evidence and facilitating the translation of evidence-based research findings. The EPCs now number 14 and have become the cornerstone of AHRQ's program of comparative effectiveness research.
