|
Using Theory of Change to Guide Multi-site, Multi-level, Mixed Method Evaluations
|
| Presenter(s):
|
| Andrea Hegedus,
Northrop Grumman Corporation,
ahegedus@cdc.gov
|
| Abstract:
Evaluators are often asked to design multi-site, multi-level evaluations of complex social systems. This type of assessment requires adapting standard program evaluation methods to larger scales and unifying all efforts toward shared outcomes. One tool suited to such complexity is theory of change evaluation. Theory of change is an approach that links activities, outcomes, and contexts in a way that maximizes the attribution of outcomes to interventions. This presentation will define the theory of change approach, discuss its components, describe the use of theory to address different levels of the evaluation model and specify outcomes, and show how to choose appropriate methodologies to answer evaluation questions. Used in this way, theory of change becomes an effective tool that helps evaluators design complex, multi-faceted evaluations, ground activities in evidence-based theories, and increase the rigor of the evaluation process.
|
|
FY05 and FY06 Real Choice Systems Change Grants: Evaluating Medicaid Systems Transformation Across 18 States by Linking the Initiative-Level Evaluation to Local Grant Evaluation Efforts
|
| Presenter(s):
|
| Yvonne Abel,
Abt Associates Inc,
yvonne_abel@abtassoc.com
|
| Deborah Walker,
Abt Associates Inc,
deborah_walker@abtassoc.com
|
| Meg Gwaltney,
Abt Associates Inc,
meg_gwaltney@abtassoc.com
|
| Meredith Eastman,
Abt Associates Inc,
meridith_eastman@abtassoc.com
|
| Margaret Hargreaves,
Abt Associates Inc,
meg_hargreaves@abtassoc.com
|
| Susan Flanagan,
Abt Associates Inc,
sflanagan@westchesterconsulting.com
|
| Abstract:
Between FY2005 and FY2006, the Centers for Medicare and Medicaid Services (CMS) awarded Systems Transformation (ST) Grants to 18 states. The Grants support states’ efforts to transform their infrastructure and promote more coordinated, integrated long-term care and support systems that serve individuals of all ages and disabilities.
The Grants are structured into 1) a start-up phase that requires grantees to participate in a nine-month Strategic Planning process and to complete evaluation plans linking grant goals and objectives with outcomes, and 2) an implementation phase during which grantees document their progress through web-based reports completed at six-month intervals. Abt Associates, as the initiative-level evaluator, designed data collection tools that capture consistent information across these key grant phases. This presentation highlights the influence of strategic planning on grant design and implementation, identifies factors that facilitate and challenge grant implementation, and links outcomes from local grant evaluations to the initiative-level evaluation.
|
|
Evaluation of a Multi-Site, Multi-Program Obesity Prevention Initiative: Incorporating Site-Level Measurement Tools and Program Theory into a Systematic Assessment of Initiative Strengths and Challenges
|
| Presenter(s):
|
| Zachary Birchmeier,
University of Missouri,
birchmeierz@missouri.edu
|
| Nathaniel Albers,
University of Missouri,
nathaniel.albers@gmail.com
|
| Caren Bacon,
University of Missouri,
baconc@missouri.edu
|
| Dana N Hughes,
University of Missouri,
hughesdn@missouri.edu
|
| Jill Nicholson-Crotty,
University of Missouri,
nicholsoncrottyj@missouri.edu
|
| David Valentine,
University of Missouri,
valentined@missouri.edu
|
| Charles Gasper,
Missouri Foundation for Health,
cgasper@mffh.org
|
| Abstract:
Chen’s (2005) Program Theory provides a framework for evaluating program success (i.e., the Change Model) as a function of program capacity elements (i.e., the Action Model) that are nested within the type and readiness of the target community. Applying the theory to an initiative with diverse program content and outcomes required defining and assessing program success and capacity across sites, as well as the characteristics of each target community. Drawing on interviews and web-based surveys of program staff, observations during site visits, programs’ internal evaluation data, and interviews with community informants, each site was scored on success, capacity, and community elements using modified versions of existing tools. Regressions were used to account for between-site variability. The system helped to identify the elements of capacity possessed by the most successful sites and to translate those successes into recommendations for other sites nested in similar contexts.
|