
Session Title: Federal Evaluation Policy: Impact on Funding Priorities, Evaluation Research and Practice for Broadening Participation in Science, Technology, Engineering and Mathematics (STEM)
Panel Session 401 to be held in Centennial Section F on Thursday, Nov 6, 4:30 PM to 6:00 PM
Sponsored by the Multiethnic Issues in Evaluation TIG
Chair(s):
Elmima Johnson,  National Science Foundation,  ejohnson@nsf.gov
Abstract: Current Federal evaluation policy has its roots both in the political emphasis on accountability and scientific rigor and in budget realities. The impact is felt not only in agency funding priorities but also in evaluation research and practice. This session will examine the impact of federal legislation such as the No Child Left Behind Act on educational program evaluation activities and initiatives. Attention will be given to how policy has influenced evaluation capacity building and evaluation practices for supporting underrepresented groups. Questions to be addressed include: (1) what challenges does the emphasis on scientific rigor pose for the evaluation of educational initiatives and opportunities for underrepresented groups, and (2) what should the role of evaluation be in advancing the knowledge base on broadening participation while addressing accountability requirements? Additionally, the session will outline a comprehensive approach for assessing the value, rigor, and impact of education programs focused on diversity and equity.
Federal Evaluation Policy in STEM: A Historical Perspective
Elmima Johnson,  National Science Foundation,  ejohnson@nsf.gov
This presentation will cover how policy has influenced evaluation capacity building and evaluation practices for supporting underrepresented groups. NSF's Broadening Participation activities will be highlighted.
Metrics for Monitoring Broadening Participation Efforts
Toni Clewell,  Urban Institute,  tclewell@ui.urban.org
This presentation will focus on the identification of program monitoring metrics, indicators for program evaluation, and the rationale for their use. The central issue is what represents an acceptable, valid, and sufficient set of indicators, applicable across groups of programs, by which to measure progress in broadening participation. The presenter has a wealth of experience in the identification and analysis of broadening participation (BP) efforts in STEM education as well as in the identification of metrics to measure outcomes across projects.
Designs and Indicators for Program Evaluation
Bernice Anderson,  National Science Foundation,  banderso@nsf.gov
This presentation will focus on which study design options are appropriate under which sets of circumstances. Each of the various assessment methods available has a certain utility and appropriateness under a defined set of circumstances. The presenter will draw on their expertise to define the context of NSF's portfolio of programs.