
Session Title: Real-Time Evaluation in Real Life
Panel Session 380 to be held in Room 105 in the Convention Center on Thursday, Nov 6, 3:35 PM to 4:20 PM
Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Gale Berkowitz,  David and Lucile Packard Foundation,  gberkowitz@packard.org
Discussant(s):
Bernadette Sangalang,  David and Lucile Packard Foundation,  bsangalang@packard.org
Abstract: Real-time evaluations aim to support ongoing learning and strategy development. These evaluations regularly create opportunities for learning and bring evaluation data to the table in accessible formats for reflection and use in decision making. They use the evaluation process and data to identify what about a program or strategy is or is not working and to identify midcourse corrections that can ultimately lead to better outcomes. While evaluation to support real-time learning and strategy sounds good in theory, it can be difficult to achieve successfully in practice. This session will examine the experiences of two real-time evaluations designed to support long-term grantmaking programs within the David and Lucile Packard Foundation. Presenters will describe how their evaluations were designed (including specific examples of evaluation processes, methods, and data) and will share what they have learned, including some mistakes, about using this approach in real life.
Evaluation as Integral to Program Design
Lande Ajose,  BTW Informing Change,  lajose@btw.informingchange.com
Initiatives in the early stages of program design and planning can benefit greatly from real-time evaluation because it provides a continuous feedback loop for refining strategy and clarifying objectives. Such has been the case with the David and Lucile Packard Foundation's grantmaking program focused on increasing the quality of, and access to, after-school programs in California, a program shaped by Proposition 49, a measure mandating that the state set aside $550M annually for after-school programs. Contrary to conventional wisdom, conducting an evaluation of programs and strategies in the design phase requires evaluators to give up 'critical distance' and instead function as partners in program design and as conveners of a learning community. This session will explore how evaluators can operate as both insiders and outsiders as programs and strategies unfold and still emerge with credible evaluation findings.
Evaluation to Support Advocacy Strategy and Learning
Julia Coffman,  Harvard Family Research Project,  jcoffman@evaluationexchange.org
Real-time evaluation can be particularly useful for advocacy and policy change efforts that evolve without a predictable script. To make informed decisions, advocates need timely answers to the strategic questions they regularly face, and evaluation can help fill that role. Five years ago, the David and Lucile Packard Foundation established a grantmaking program to achieve an ambitious policy goal: voluntary, quality preschool for all three- and four-year-olds in California by 2013. Because the Foundation knew from the program's start that the path to this goal would be unpredictable, it invested in an evaluation, conducted by the Harvard Family Research Project, that emphasized continuous feedback and learning. This session will describe how the evaluation was designed to support Foundation learning and strategy development (using some new and innovative methods created specifically for this purpose), and how and why the design has evolved over time.
Real-time Evaluation to Inform Strategic Grantmaking
Arron Jiron,  David and Lucile Packard Foundation,  ajiron@packard.org
Bernadette Sangalang,  David and Lucile Packard Foundation,  bsangalang@packard.org
Philanthropy has no absolute measure of success. The David and Lucile Packard Foundation's approach to evaluation is guided by three main principles: (1) success depends on a willingness to solicit feedback and take corrective action when necessary; (2) improvement should be continuous, and we should learn from our mistakes; and (3) evaluation should be conducted in partnership with those doing the work, to maximize learning and minimize the burden on the grantee. At the Packard Foundation, there has been a general shift from evaluation for proof ("Did the program work?") to evaluation for program improvement ("What did we learn that can help us make the program better?"). This session will describe how evaluation fits into the entire grantmaking cycle and discuss our experience with both evaluations from the Foundation's perspective.
