|
Assisting the Local Program Level with Performance Measurement
|
| Presenter(s):
|
| Leanne Kallemeyn,
Loyola University Chicago,
lkallemeyn@luc.edu
|
| Abstract:
A critical evaluation policy, the Government Performance and Results Act of 1993, has contributed to the growth of performance measurement. While some evaluators have criticized performance measurement, others have contributed to its development and incorporated it into their evaluations. The purpose of this paper is to further this conversation about how evaluators can engage with performance measurement. I use a case example of how I, as an evaluator, assisted a local Head Start program in using mandated school readiness assessments. Based on the literature and this case, I identify five areas in which an evaluator can both work within and challenge performance measurement: what is assessed, how it is assessed, how results are interpreted and used, what purpose the assessment serves, and what relationships are facilitated. I also provide example activities that I completed with the local program to illustrate how evaluators can assist local programs with performance measurement.
|
|
Practical Limitations to Gathering Data in Support of GPRA and PART
|
| Presenter(s):
|
| William Scarbrough,
Macro International Inc,
william.h.scarbrough.iii@macrointernational.com
|
| Herbert Baum,
Macro International Inc,
herbert.m.baum@macrointernational.com
|
| Renee Bradley,
United States Department of Education,
renee.bradley@ed.gov
|
| Andrew Gluck,
Macro International Inc,
andrew.gluck@macrointernational.com
|
| Abstract:
Providing the Office of Management and Budget with data to support a program’s Program Assessment Rating Tool (PART) review represents an unfunded mandate for the program. As such, the program often can gather data only on a “shoestring” budget. Macro International has worked for four years with the Research to Practices Program, Office of Special Education Programs, US Department of Education, to develop program measures and to collect data in support of those measures. In this paper we briefly describe how we have used expert panels to review material and the challenges associated with this approach.
|
|
Partnership Contribution to Program Outcomes: Logic Models as Tools for Developing Measures for Partnerships
|
| Presenter(s):
|
| Valerie Williams,
RAND Corporation,
valerie@rand.org
|
| Lauren Honess-Morreale,
RAND Corporation,
laurenhm@rand.org
|
| Abstract:
Partnerships are a common strategy for many programs. Government programs, in particular, rely on partnerships because they offer the potential to deliver more effective programs as well as greater efficiency in producing and delivering outputs. Despite the ubiquitous nature of partnerships, there is very little information on developing measures to assess the value of partnerships, particularly the contribution of partnerships to achieving program outcomes. In this paper, the authors describe the use of a logic model template as a means of developing measures that can be applied to partnerships. The logic model provides a framework for describing partnership inputs, activities, outputs, and outcomes, and as such serves as a basis for developing measures to assess 1) specific elements of partnerships; 2) the cost efficiency and effectiveness of the partnership; and 3) the extent to which “partnership,” rather than simply two entities working together, contributes to program outcomes.
|