| Session Title: Exploring Evaluation Expectations for Nonprofits, Foundations and Government: A Preview of an Upcoming New Directions for Evaluation Volume |
| Multipaper Session 739 to be held in Liberty Ballroom Section A on Saturday, November 10, 10:30 AM to 12:00 PM |
| Sponsored by the Non-profit and Foundations Evaluation TIG |
| Chair(s): |
| Joanne Carman, University of North Carolina, Charlotte, jgcarman@uncc.edu |
| Discussant(s): |
| Kimberly Fredericks, Indiana State University, kfredericks@indstate.edu |
| Abstract: In recent years, many evaluators have observed that nonprofit organizations are under increasing pressure to demonstrate their effectiveness and document their program outcomes, as the current political and funding environment continues to stress the importance of accountability and measuring performance. Foundations, government agencies, and other funders are asking nonprofit organizations for more evaluation and performance measurement data. Yet, most nonprofit organizations continue to struggle with these demands, and many lack the capacity to implement evaluation and performance measurement in comprehensive or meaningful ways. This panel will bring together the authors of an upcoming NDE volume to discuss the current state of evaluation practice among nonprofit organizations, as well as the different expectations for nonprofit evaluation from the perspective of nonprofit leaders and various types of funders, including foundations, the federal government, and the United Way. The authors will discuss their findings and offer recommendations for reconciling these differing expectations. |
| Nonprofits and Evaluation: Empirical Evidence From the Field |
| Joanne Carman, University of North Carolina, Charlotte, jgcarman@uncc.edu |
| Kimberly Fredericks, Indiana State University, kfredericks@indstate.edu |
| During the last fifteen years, nonprofits have faced increasing pressure from stakeholders to demonstrate their effectiveness, document program outcomes, and improve their accountability. The purpose of this paper is to provide an empirical description of what evaluation practice looks like among today's nonprofit organizations and to present empirical evidence of current thinking and practice within the field. This paper reviews the empirical data that have been gathered and published in the literature about the current program evaluation practices of nonprofit organizations; presents descriptive data about evaluation practice gathered from a large mail survey of nonprofit organizations; and discusses a three-pronged typology of what nonprofit organizations think about evaluation, based upon a factor analysis of questions from the survey data. Finally, the paper offers recommendations about how evaluators can work with each type of organization to expand its thinking about evaluation and build evaluation capacity. |
| Nonprofits and Evaluation: Managing Expectations From the Leader's Perspective |
| Sal Alaimo, Indiana University, salaimo@comcast.net |
| The call for greater accountability in the nonprofit sector continues to affect funders, board members, individual donors, evaluators, and executive directors of nonprofit organizations. While this call has predominantly been framed around issues of fiscal responsibility, programmatic responsibility through evaluation has come to play an increasingly important role in an organization's overall accountability. Funders, accreditation organizations, government agencies, and other stakeholders are increasingly asking nonprofit organizations to provide information about the effectiveness of their programs. This request or requirement constitutes an external pull emanating from the organization's stakeholders. Variance in expectations among these stakeholders results from different perceptions, beliefs, norms, and parameters of their relationships with nonprofit organizations. This analysis examines whether leaders respond to the external pull of their important stakeholders, help drive an 'internal push' for evaluating programs as an intrinsic value embedded in their organization's culture, or attempt to balance the external pull with the internal push. |
| Foundations' Expectations (and Capacity) to Support, Conduct, and Use Evaluations |
| Thomas Kelly, Annie E Casey Foundation, tkelly@aecf.org |
| The philanthropic sector has always included a variety of donors and endowments with many different grantmaking approaches and expectations across community foundations, private grantmaking foundations, private operating foundations, and other endowments. Within the past 10 years, we have seen an expansion both in the numbers and types of foundations and in the approaches they take to grantmaking. The diversity in structure, mission, and operations within the sector mirrors the diversity in approaches to and expectations for evaluation. Most importantly, foundations and other funders have affected the practice of evaluation, including its vocabulary, budgets, and methodology, through the expectations they communicate via their funding behaviors and requirements. This paper addresses these recent developments, which have raised or altered expectations for the purposes and uses of evaluation, and focuses on the challenges that foundations face in implementing relevant evaluations. |
| Evaluation and the Federal Government |
| David Introcaso, United States Department of Health and Human Services, david.introcaso@hhs.gov |
| This paper will provide an overview of the current state of evaluation practice and methods within the executive branch of the federal government. The paper will: discuss current expectations for and results from evaluating government program performance; discuss inherent problems or assumptions associated with performance expectations; and describe how evaluation is currently practiced across various federal agencies. A main focus will be the implementation of the Office of Management and Budget's Program Assessment Rating Tool (PART). The paper will examine the PART's rationale and inherent assumptions and challenge them by arguing that this model of planning frequently conflicts with lived experience in a complex, political environment where high levels of goal ambiguity are inherent. Suggestions for what can be done to avoid this trap or to better align planning expectations with performance will be discussed. |
| United Way Experiences in Measuring Program Outcomes and Community Impact |
| Michael Hendricks, Independent Consultant, mikehendri@aol.com |
| Margaret Plantz, United Way of America, meg.plantz@uwamail.unitedway.org |
| Over the past ten years, the United Way (UW) system has been an influential force for greater evaluation and accountability within the nonprofit sector. More than 400 local United Ways (UWs) have helped approximately 19,000 local nonprofit agencies to identify the outcomes they intend for their clients to achieve as a result of their services, measure those outcomes, analyze the results, and use that information to improve their effectiveness and document their accomplishments. This influence has spread beyond the UW system itself through the use of the popular manual Measuring Program Outcomes: A Practical Approach. Despite this overall success, however, local UWs and agencies still face undeniable challenges in measuring outcomes and using the findings. Outcome measurement requires skills that are not readily available and time and energy that are not always present, and local funders often use differing concepts and reporting requirements. Making funding decisions based on outcome data can also be difficult for UW staff and allocations volunteers. This paper discusses these issues and presents future directions for the field. |