
Session Title: Promoting Quality Impact Studies: Constructive, Context-Appropriate Policies for Strengthening Research Designs for Impact Evaluations
Panel Session 506 to be held in Lone Star E on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Quantitative Methods: Theory and Design TIG
Chair(s):
George Julnes, University of Baltimore, gjulnes@ubalt.edu
Abstract: The recent controversy over the role and value of random assignment experiments, particularly with regard to the U.S. Department of Education, has raised the questions of what constitutes a strong design for impact evaluations (one more likely to yield valid impact estimates and judgments of quality) and when such designs should be employed. As the evaluation community has grappled further with these issues, some tentative resolution of the controversy has emerged. The four presenters in this session will provide a framework for assessing the context as it relates to the value of alternative evaluation designs, report on a government assessment of when different designs are most appropriate, provide an example of mixing methods to strengthen a particular research design, and suggest multiple dimensions to consider in evaluating the value of different designs.
What's in an Evaluation Design? Matching the Policy Questions to the Program and Evaluation Context Before Making Methodological Choices
Eleanor Chelimsky, Independent Consultant, eleanor.chelimsky@gmail.com
Policy questions posed to evaluators in government may not always be the right questions. I argue in this paper that when evaluators develop their evaluation design, they should delay making methodological choices until they have examined, among other factors: the historical and political context of the program; the quality of prior evaluations, their results, and the difficulties they encountered; controversy over goals, program design, and related matters; the specific positions of sponsors and stakeholders; and a host of other issues, such as whether there is a need for participation, the existence of public data sets, and the time allotted versus evaluative requirements. Only then can evaluators determine the degree of fit between the questions posed and potential methodologies, and whether those questions should stand or be changed.
A Variety of Rigorous Methods Can Help Identify Effective Interventions
Stephanie Shipman, United States Government Accountability Office, shipmans@gao.gov
While program evaluations take various forms to address different questions, federal policymakers are most interested in impact evaluations that help managers adopt effective practices to address national concerns. Concern about the quality of federal social program evaluations has led to calls for greater use of randomized experiments in impact evaluations. The randomized experiment is considered a highly rigorous approach for isolating program effects from other, non-program influences, but it is not the only rigorous research design available and is not always feasible. To help congressional staff assess efforts to identify effective interventions, GAO was asked to identify (1) the types of interventions for which randomized experiments are best suited to assessing effectiveness, and (2) the alternative evaluation designs used to assess the effectiveness of other types of interventions. In this paper, we report our answers, drawn from an analysis of the evaluation methodology literature and consultation with evaluation experts.
Mixed-methods Evaluation Design for a Complex, Evolving Systems Initiative
Debra Rog, Westat, debrarog@westat.com
To assist systems in moving from managing to ending homelessness for families, the Gates Foundation is funding an initiative with three counties in the Pacific Northwest. This presentation will describe a longitudinal mixed-methods design for evaluating the Initiative’s implementation and effectiveness at the systems, organizational, and family levels. At the systems level, qualitative and quantitative data will be collected for the demonstration counties as well as two comparison counties. At the organizational level, selected case studies will assess the impacts on individual homeless-serving organizations over time. At the family level, effects of the system on families’ experiences and outcomes will be assessed by comparing two cohorts of families: a “no intervention/early intervention” cohort identified in the first year and an “intervention” cohort identified in Year 3. Each cohort will be tracked for 18 months and compared to a comparison group of families constructed from state data.
Designing for Success With Impact Evaluations: Dimensions of Quality for Evidence to Be Actionable
George Julnes, University of Baltimore, gjulnes@ubalt.edu
There has been considerable controversy over efforts to promote “rigorous” evaluation methods that might yield evidence appropriate for guiding federal programs. While recent efforts at reconciliation among proponents of traditions such as random assignment experiments, qualitative evaluation, and performance management have been useful in reminding us of the contextual influences on appropriate designs, more work remains. This paper presents a framework for evaluation design that balances a focus on the validity of impact estimates with a complementary focus on methods that support valid valuation of program impacts.
