Session Title: Assessing Evaluation Needs: Multiple Methods and Implications for Practice

Panel Session 465, to be held in Sebastian Section I4 on Friday, Nov 13, 9:15 AM to 10:45 AM

Sponsored by the Research on Evaluation TIG

Chair(s): Arlen Gullickson, Western Michigan University, arlen.gullickson@wmich.edu

Discussant(s): Frances Lawrenz, University of Minnesota, lawrenz@umn.edu

Abstract:
The Evaluation Center was recently funded by the National Science Foundation to provide evaluation-related technical assistance for one of its programs. In this project's first year, a comprehensive needs assessment was conducted to determine the types of evaluation support most needed by the program's grantees. In this panel session, each presenter will describe ways in which evaluation-specific needs were assessed. The first paper describes three distinct metaevaluation efforts. The second discusses how the questions posed for technical assistance were used as a means of identifying needs. The last paper discusses how existing data on evaluation practices among the target audience were mined and combined with results from document reviews and interviews to provide another perspective on needs for evaluation support. Together, the papers offer a broad picture of how to assess needs and rich lessons in how evaluators can better assist their clients and improve their practice.
Metaevaluation as Needs Assessment

Lori Wingate, Western Michigan University, lori.wingate@wmich.edu
This presentation describes three metaevaluation studies conducted to identify needs for evaluation resources and training among evaluators of projects funded through a National Science Foundation program. In the first study, thirty raters assessed the extent to which ten evaluations met the Joint Committee's Program Evaluation Standards. A second study, conducted by one of the evaluation discipline's top scholars, used a case study approach for an in-depth examination of evaluation practices at two sites. A third study, conducted by another evaluation expert, used interviews and document reviews to assess evaluation efforts at three sites. The collective results are being used to showcase exemplary practices and to guide capacity-building efforts in aspects of evaluation determined to need improvement. The presentation illuminates different ways in which evaluation efforts can themselves be evaluated and demonstrates how summative metaevaluations can serve a formative function to assess needs and build capacity in evaluation.
Listening to Needs: How Requests for Evaluation Assistance Can Teach Us How to Be Better Evaluators

Stephanie Evergreen, Western Michigan University, stephanie.evergreen@wmich.edu
As a National Science Foundation-funded evaluation technical assistance provider, The Evaluation Center receives numerous requests from project directors who need help evaluating their grant activities. This paper will review the most frequently asked questions (FAQs) and examine their implications for evaluators of all backgrounds. The FAQs reveal gaps in the ways evaluators communicate with clients, present their plans and work products, and make themselves known to potential clients. In general terms, evaluation clients are expressing a need for greater support with "softer" skills rather than with research-oriented tasks such as data analysis. For example, a question about how a project director should balance the roles of internal and external evaluators can show evaluators where to be more proactive with a client. The presenter is a research associate at The Evaluation Center who provides technical assistance and creates resources to address evaluation needs.
Comparison of Evaluation Use and Organizational Factors as Needs Assessment

Amy Gullickson, Western Michigan University, amy.m.gullickson@wmich.edu
Regular evaluation is a requirement for projects and centers that receive National Science Foundation (NSF) funding. However, making evaluation a required activity does not mean that it will be used. This paper reports on a study based on ten years of survey data collected by The Evaluation Center, including information on needs assessment and evaluation practices. The data were used to select cases of NSF's Advanced Technological Education program grantees who described evaluation as either 'essential' or 'not useful' to their work. Review of the reports and activities from these cases revealed information about the purpose and quality of evaluation at each site. Follow-up interviews with the selected sites explored actual use (or non-use) of reports and possible influences, including organizational culture and the relationship with the evaluator. The comparison of 'not useful' and 'essential' cases identified factors that inhibit and enable the use of evaluation for program improvement.