Session Title: Evaluation Quality From a Federal Perspective

Panel Session 371 to be held in BOWIE B on Thursday, Nov 11, 4:30 PM to 6:00 PM

Sponsored by the Multiethnic Issues in Evaluation TIG

Chair(s):
Elmima Johnson, National Science Foundation, ejohnson@nsf.gov

Abstract:
“Evaluation Quality” has been identified as the conference theme, with a focus on its conceptualization and operationalization. Another area of importance is evaluation utilization.
This panel will discuss the definitions of evaluation and “new ways of thinking about the systematic assessment of our evaluation work” from a Federal perspective, i.e., that of the National Science Foundation (NSF), its grantees and contractors, and the US Government Accountability Office (GAO). The use of a contextual/cultural perspective will be woven throughout the discussions of the various evaluation mechanisms described.
Evaluation for Science, Technology, Engineering, and Math (STEM) Education Research and Development

Bernice Anderson, National Science Foundation, banderso@nsf.gov

This presentation will focus on issues of quality in evaluation planning for STEM education research and development programs. It will also address the challenges of evaluation quality within the context of implementation programs compared to intervention projects. These insights about evaluation quality will be drawn from recent capacity-building and management strategies for the planning and oversight of selected evaluations of STEM education research and development efforts funded by the National Science Foundation, in response to the Administration's call for a culture of learning and strong evidence of results from the federal investment.
National Science Foundation Committee of Visitors: Evaluation by Experts

Fay Korsmo, National Science Foundation, fkorsmo@nsf.gov

Connie Kubo Della-Piana, National Science Foundation, cdellapi@nsf.gov

Each grants program at the National Science Foundation is reviewed by an external Committee of Visitors every three or four years. Applying a common set of criteria, Committees of Visitors review (a) the decision processes leading to awards or declinations of research and education proposals and (b) program management. Program managers respond to the Committee of Visitors' determinations, and both the Committee of Visitors reports and the program responses are made available to the public. Does the use of Committees of Visitors lead to quality evaluation? According to Averch (1994), the validity of expert judgment in program evaluation rests on acceptance of the expert judgment about a program and on action taken based on that judgment; as a result of the action taken, benefits are realized or costs are avoided. This presentation examines Committee of Visitors reviews in light of the heightened demand for high-quality evaluation of government programs.
Evaluation Quality: Threats and Solutions

Clemencia Cosentino de Cohen, Urban Institute, ccosentino@urban.org

Evaluation rigor and quality are central to the validity of the evaluation findings on which important funding and programmatic decisions are made, yet researchers often face constraints that require adjustments that may threaten the quality of their work or the rigor of their designs. In this presentation, I will identify and discuss some solutions to these “threats,” as well as the window of opportunity that may be created in the process and yield advances in evaluation research. Specifically, relying on evaluations completed, ongoing, and currently being designed, I will discuss three common threats to quality: cross-sectional versus longitudinal data, confidentiality-driven restrictions on information (FERPA), and a changing policy environment (using broadening participation programs as an illustration). In so doing, I will discuss how monitoring data collections and portfolio evaluations (based on strategies employed across projects, rather than individual project evaluations) may offer viable solutions to these constraints.
Comparing Quality Standards in Audit and Evaluation

Valerie Caracelli, United States Government Accountability Office, caracelliv@gao.gov

Today, in government, there is a resurgence of interest in evaluation (see 2011 Budget Perspectives and AEA's EPTF website) and a concomitant responsibility to provide warranted conclusions on program results. In evaluation, the Joint Committee will issue the 3rd edition of the Program Evaluation Standards, which evaluators use to inform and improve their practice. Evaluators positioned in government (e.g., at GAO, in the IG communities, and elsewhere) must follow the Government Auditing Standards, the “Yellow Book,” now being updated. This presentation will juxtapose these standards to discuss the values of both professions and the prominence given to particular facets of quality, such as independence, cultural responsiveness, significance, and transparency, among others. In specific instances, a conceptual framework used to define how to meet a standard will be discussed. Last, the presentation examines how the performance audit and evaluation communities ultimately assure that standards of practice are being followed, via meta-evaluation and peer review.
Assessment of Federal Contractor Evaluation Services

Elmima Johnson, National Science Foundation, ejohnson@nsf.gov

In accordance with Federal Acquisition Regulation (FAR) Subpart 42.15, Contractor Performance Information, Federal agencies are required to prepare evaluations of contractor performance for each contract in excess of $100,000 at the time the work is completed, as well as interim reports for contracts exceeding one year. In compliance with the FAR, NSF contracting officials complete a standard contractor performance report, which solicits a rating (unsatisfactory to outstanding) and comments in areas including Quality of Product/Service, Customer Satisfaction, Contractor Key Personnel, Timeliness of Performance, and Cost Control. (This assessment is more in line with a fiscal audit and does not attend to outcomes or to implications for future Federal actions in the program area being evaluated.)
This presentation will discuss the role of this report in the portfolio of evaluation assessment tools used by NSF, its relation to NSF's definition of evaluation quality, and the context in which the information it provides is used.