|
Challenges and Strategies in Evaluating Large Center Grants
|
| Presenter(s):
|
| Judith Inazu, University of Hawaii at Manoa, inazu@hawaii.edu
|
| Abstract:
Federal funding agencies are increasingly requiring external evaluations of large research and education center grants. Many of these federally funded centers encompass multiple institutions, require synergistic integration of research and education, and mandate outreach initiatives with an emphasis on increasing diversity. Often, staff in the funding directorates can provide little guidance regarding the evaluation because they themselves have had little training in evaluation. This paper discusses the challenges that these large center grants pose for evaluators and the strategies that have been used to address them. The challenges include working with diverse populations at multiple institutions; measuring macro-level concepts such as collaboration, sustainability, and system change; assessing the center's multiple missions; researchers' lack of familiarity with evaluation; and developing metrics for diversity. Strategies adopted to meet these challenges include institutional case histories, collaboration maps, mass internet surveys, interviews with institutional leaders, and mining institutional databases.
|
|
Evaluation in Multi-Level Governance Settings
|
| Presenter(s):
|
| Thomas Widmer, University of Zurich, thow@ipz.uzh.ch
|
| Abstract:
This paper discusses the issues involved in evaluating in settings where many levels are involved and where the mode of intervention is shaped more by negotiation than by hierarchy. The paper first presents recent developments in public policy that are responsible for the trend towards multi-level governance. To better understand these kinds of settings, a set of typical characteristics is elaborated in the paper. Topics such as the multiplicity and volatility of goals, inter-level transparency, and trust are at the centre of the discussion. Based on experiences from evaluations in various fields such as public health, environmental education, and sustainable development, the appropriateness of evaluation approaches, conceptions, methods, and instruments in such settings is discussed. Special emphasis is placed on the ethical considerations involved in evaluating multi-level governance. The paper closes with some recommendations on how to improve the quality (in a broad sense) of evaluation in multi-level governance settings.
|
|
Building Relationships: Partnerships between State and Local Governments and State Universities in the Time of Evaluation
|
| Presenter(s):
|
| Virginia Dick, University of Georgia, vdick@cviog.uga.edu
|
| Melinda Moore, University of Georgia, moore@cviog.uga.edu
|
| Abstract:
Increasingly, government agencies at all levels are facing requirements for extensive evaluations of programs and services. These requirements come from all funding sources, both government and foundation. Often, the agency lacks the resources to adequately address all of the evaluation requirements without external support, and the requirements frequently dictate the use of an external evaluator. This is where building relationships between state and local governments and state colleges and universities can meet important needs for both groups. This presentation will focus on how faculty and institutions can work with local and state governments to provide the expertise needed to support evaluation efforts for funded programs, services, and collaborations. Examples from real-world programs and projects will be used to explore the issues, challenges, and strengths related to building these relationships.
|
|
Starting Over in the Middle: Program Evaluation in an Era of Accountability
|
| Presenter(s):
|
| Maliika Chambers, California State University East Bay, maliika.chambers@csueastbay.edu
|
| Abstract:
Federal partnership grants present a unique challenge for evaluators in that the multiple accountability relationships can significantly impact the purpose and quality of the program evaluation. Recent literature in the field of evaluation examines how the pressures of accountability can shape performance measurement into a tool for monitoring rather than for program improvement, and highlights key points of analysis in these settings.
In this article, the challenges of program evaluation under such pressures are described in conjunction with the methods used to document evidence of program impact. The author illustrates how striking a balance between the roles of researcher and evaluator, asking the right questions, and sharing the load to make project evaluation and accountability everyone's business was a critical turning point in the overall success and effectiveness of the project evaluation. Evaluation models from the literature are presented, and suggestions for further research are offered.
|