Session Title: The Challenges of Evaluating the Scale Up and Replication of Innovative Social Programs

Panel Session 460 to be held in Pacific D on Thursday, Nov 3, 4:30 PM to 6:00 PM

Sponsored by the Non-profit and Foundations Evaluation TIG

Chair(s): Gale Berkowitz, MasterCard Foundation, gberkowitz@mastercardfdn.org

Discussant(s): Gale Berkowitz, MasterCard Foundation, gberkowitz@mastercardfdn.org

Abstract:
Policymakers and funders increasingly ask, "What information is needed to decide whether and how to expand promising new programs or policies to other places?" This panel will shed light on the range of issues evaluators can address to inform funders' replication and "scaling up" strategies. The session will discuss questions such as: What types of evidence are needed before replicating or scaling up a program? How might evaluators design evaluations to distinguish the program attributes that should be adapted to the differing social and political environments of adopting organizations from those that form the unvarying core of the program? How can evaluators assess fidelity to the core program model? And once a program has been expanded or replicated, how can evaluators measure its success? The panel will suggest ways to approach these issues and identify outstanding questions to be explored in the future.
Scaling, Scale Up and Replication: A Call for a More Disciplined Discussion

Laura Leviton, The Robert Wood Johnson Foundation, llevito@rwjf.org
Patricia Patrizi, Public Private Ventures, patti@patriziassociates.com
Public and private funders have advanced efforts to "scale up" with little conceptual clarity and without addressing the limits of what we know about the organizational, human, and scientific factors that constrain its effectiveness or appropriateness. Discussions have tended to accept scale-up as unquestionably good; while attending to some issues, such as the need for sufficient evidence before going to scale and the importance of fidelity to the model, they have not sorted through the factors that affect the capacity to reach more people with better services that produce better outcomes. These factors include organizational and population variation, the limits of single-model approaches relative to these variations, necessary local adaptation, limits to generalizability, and the ways scale up supports or interferes with practice improvement. This presentation will offer ideas about how we think about, deliver, and evaluate efforts to scale up.
Addressing Challenges in Evaluating Innovations Intended to be Scaled

Thomas Kelly, The Annie E. Casey Foundation, tkelly@aecf.org
Why are successfully evaluated, well-evidenced programs not taken to scale? Evaluations of innovations intended to scale need to be designed and conducted with more intention, attending not only to evaluation methods but also to the goal of increasing the utilization and applicability of the evaluation's findings in real practice in the field. Social and human service prevention and intervention programs are implemented not in controlled settings but in varying social and political environments that always require a degree of adaptation. Therefore, funders, policymakers, and nonprofits need more than excellent evidence of impact: they also need detailed implementation guidance, help in knowing which data on quality are important, and contingency plans for responding to real events. This presentation will focus on the necessary elements of evaluations capable of addressing not only evidence of outcomes but also the data and knowledge needed to make practical decisions during replication.
Replicating Innovative Program Models: What Evidence do we Need to Make it Work?

Margaret Hargreaves, Mathematica Policy Research, mhargreaves@mathematica-mpr.com
Beth Stevens, Mathematica Policy Research, bstevens@mathematica-mpr.com
"Scaling up" is partially a result of replication. Can an organization adopt or replicate) a new model? If not, could scaling up be achieved? How can evaluation contribute to answers to this question? What elements of knowledge, strategy, and local conditions need to be present in both the original organization and the organization replicating the model for successful replication to occur? The evaluation of the RWJF Local Funding Partnerships Program included case studies of four pairs of programs - the organizations that had developed innovative program models and the organizations that replicated them. These case studies reveal that the goal of most evaluations - - evidence of effectiveness, is only one of the elements that further the chances of successful replication. Diffusion of knowledge of the innovation, identification of appropriate candidates for replication, and the provision of technical assistance to transplant the innovation are also part of the process.
Measuring the Capacity for Replication and Scale Up

Lance Potter, New Profit Inc, lance_potter@newprofit.com
In order for interventions to replicate and scale effectively, implementing organizations must have the organizational conditions to support growth with fidelity. New Profit, Inc., a social venture fund, has participated in the successful scaling of many notable not-for-profit organizations. New Profit's approach includes a Growth Diagnostic Scale, developed over a decade of work scaling up successful social interventions. This presentation will describe the Growth Diagnostic Scale and its value for assessing an organization's capacity to replicate. The scale has applications for both funders and service providers seeking to assess where an organization sits on the continuum from start-up to program maturity, to employ targeted interventions to improve program growth, and to predict future problems for growing organizations based on their current organizational strengths and weaknesses.