Session Title: Evaluation and Social Metrics in the Nonprofit Sector

Panel Session 842 to be held in Liberty Ballroom Section A on Saturday, November 10, 3:30 PM to 5:00 PM

Sponsored by the Non-profit and Foundations Evaluation TIG
Chair(s):
Huilan Yang, W.K. Kellogg Foundation, hy1@wkkf.org

Discussant(s):
Victor Kuo, Bill & Melinda Gates Foundation, victor.kuo@gatesfoundation.org
Abstract:
Are traditional approaches to program evaluation losing ground in foundations? What other methods are foundations employing to address questions of accountability and organizational learning? What, if anything, can evaluation practitioners do to deliver findings to foundations seeking timely information in a rapidly changing environment? This panel will address these questions by discussing the role and use of evaluation in the nonprofit sector, especially in private foundations, as well as the changes evaluation has been undergoing. Against the backdrop of performance and social metrics gaining prominence in nonprofits, the panelists will also explore the potential future of evaluation in foundations. Concrete examples will be given to enrich the conversation. The panel will conclude with questions, insights, and comments from the audience on this subject.

Performance Metrics and Evaluation in Large Foundations
Victor Kuo, Bill & Melinda Gates Foundation, victor.kuo@gatesfoundation.org

Over the last decade, foundations have experimented with various forms of evaluative activity. Basic outcome monitoring, referred to by some as 'performance metrics' or 'social metrics', has gained prominence in some large foundations. Dashboards and scorecards that summarize program outcomes, funding levels, exemplary projects, and critical contextual factors feature prominently. Interest in social metrics marks a departure from traditional approaches, such as large, longitudinal studies, whose format and lack of timeliness have left foundation staff dissatisfied. Alternatives such as participatory evaluations and case studies have also been rejected. This presentation will offer examples of foundation-developed 'social metrics' templates and will consider how they might be used in tandem with other types of evaluation activities. A framework for designing the evaluation function in large foundations using multiple evaluation approaches will be offered, along with suggestions for evaluators on how to work with foundations.

Social Metrics for Accountability
Jianping Shen, Western Michigan University, shen@wmich.edu

Foundations are conscious of social metrics for accountability. However, such metrics are not easy to define in the foundation world. Our experience illustrates that foundation staff are attentive to accountability and that the metrics evolve with the development of the funded program. For example, the evaluation of the Kellogg Foundation's Unleashing Resources Initiative focused first on the fidelity of program implementation. Then, in consultation with foundation staff, we added a component measuring the amount of resources unleashed and how the programming investment relates to that amount. More recently, we added another component to evaluate the impact on communities and individuals' lives. Our evaluation of the Kellogg Foundation's School-based Health Care Policy Program first focused on policy-making activities, such as youth engagement and community mobilization. Now we have begun to tally policy achievements, and we will next evaluate the impact of those achievements.

Social Metrics for Organizational Learning
Shao-Chee Sim, TCC Group, ssim@tccgrp.com

Framed as a 'learning' tool for building capacity among foundation staff and their grantees, social metrics can ease concerns and secure buy-in from various stakeholders. Some concrete examples will be offered about how to frame social metrics in a learning context. Specifically, I will draw from two recent foundation experiences in developing a scorecard and a success-measure framework. I will discuss how the development of social metrics has helped clarify grant-making strategies as well as short-term and long-term outcomes. More importantly, I will highlight how program staff, working alongside evaluation staff, can benefit from building their collective knowledge of and capacity for evaluation. I will also discuss how grantee organizations can benefit from this learning approach. Framing social metrics in a learning context brings evaluation officers closer to their program colleagues while ensuring that grantees and foundations are held accountable for their work.