Session Title: Use of Evaluation: Overcoming the Challenges

Panel Session 727, to be held in Texas A on Saturday, Nov 13, 8:00 AM to 9:30 AM

Sponsored by the Government Evaluation TIG

Chair(s): Joseph Wholey, University of Southern California, joewholey@aol.com

Discussant(s): Joseph Wholey, University of Southern California, joewholey@aol.com

Abstract:
The demand for evaluation information is growing. Congress, state legislatures, local legislative bodies, public agencies, foundations, nonprofit organizations, and other funders are increasingly demanding information on how program funds were used and what those programs produced. Use of evaluation findings remains problematic, however.

In this panel, presenters will identify the challenges that agencies and evaluators face in producing credible, useful evaluations and in getting those evaluations used. Each presenter will then suggest ways to overcome these challenges, produce credible evaluations, and get evaluation findings used to support policy and management decisionmaking.

The Challenge of Producing Credible Evaluations

Kathryn Newcomer, George Washington University, newcomer@gwu.edu

Kathryn Newcomer’s presentation will discuss fundamental elements that evaluators and the organizations sponsoring evaluations should consider before undertaking any evaluation work, including how to select programs to evaluate; match evaluation approaches to information needs; identify key contextual elements shaping the conduct and use of evaluation; achieve the methodological rigor needed to support credible findings; design responsive and useful evaluations; and get evaluation information used.

The Challenge of Producing Useful Evaluations

Harry Hatry, Urban Institute, hhatry@urban.org

Harry Hatry’s presentation will describe pitfalls that may be encountered before and during evaluations and how those pitfalls can undermine the validity, reliability, credibility, and utility of evaluations. He will suggest how the pitfalls can be avoided and how advances in technology can contribute to credible, useful evaluation work. Hatry will then discuss the selection and training of evaluators, quality control of an organization’s entire evaluation process, and ways to overcome challenges in getting evaluation findings used to improve programs and services.

Contracting for Credible, Useful Evaluations

James Bell, James Bell Associates, bell@jbassoc.com

James Bell’s presentation will describe how government agencies and other evaluation sponsors can procure needed evaluation services and products from appropriately qualified evaluation contractors. His advice will focus on five areas: creating a feasible, agreed-upon concept plan; developing a well-defined request for proposals (RFP); selecting a well-qualified evaluation team that will fulfill the sponsor’s intent; constructively monitoring interim progress; and ensuring the quality and usefulness of major evaluation products.

Current Challenges in Performance Monitoring

Theodore Poister, Georgia State University, tpoister@gsu.edu

Theodore Poister’s presentation will identify challenges in developing performance monitoring systems that add value. He will then suggest that those developing such systems should: clarify the purpose and intended uses of the monitoring system; build ownership by involving managers and decisionmakers; generate leadership to develop buy-in; identify “results owners”; delegate increased authority and flexibility in exchange for accountability for results; establish a regular process for reviewing performance data; initiate follow-up when persistent problems emerge from performance data; and monitor and improve the usefulness and cost-effectiveness of the monitoring system itself.

Poister will then turn to an emerging challenge: monitoring the performance of programs delivered through networked environments. He will suggest several ways in which those developing monitoring systems in such environments can increase the likelihood that the systems will add value; for example, by working to develop consensus among key stakeholders on goals, measures, and data collection systems; encouraging networked systems to use available statistical data; using logic models to clarify relationships among different agencies’ activities, outputs, and outcomes; and incorporating performance data into grant processes, incentive systems, and recognition processes throughout the network.