|
Session Title: Extended Learning: A Conversation Among Evaluators of the National Science Foundation (NSF) Extension Services Projects
|
|
Panel Session 787 to be held in Lone Star F on Saturday, Nov 13, 10:55 AM to 12:25 PM
|
|
Sponsored by the Cluster, Multi-site and Multi-level Evaluation TIG
|
| Chair(s): |
| Beverly Farr, MPR Associates Inc, bfarr@mprinc.com
|
| Abstract:
This panel will include a group of evaluators from the National Science Foundation (NSF)-funded Gender in Science and Engineering
(GSE) Extension Services Grants. Extension Service projects present unique challenges to evaluation because they “extend” services across tiers or layers of service, and their most direct strategies are often far removed from the ultimate target outcomes. Starting from this basic dilemma, the panel will use a question-and-answer discussion format to delve into a range of issues that characterize the evaluation challenge. The panel members will pose questions to each other, discuss ideas and strategies for meeting the challenges, and emphasize the value of establishing a community of practice for those undertaking evaluation of multi-site and multi-level projects.
|
|
Can We Really Do It All? Yes... Within Reason
|
| Elizabeth Bachrach, Goodman Research Group Inc, bachrach@grginc.com
|
|
Between every project and its evaluation, there is a unique and dynamic working relationship. However, regardless of program content, evaluators share a primary goal of designing and implementing a feasible and practical evaluation plan that will address the key research questions and follow their established guiding principles. When a project, by nature, is multi-layered and aims to reach multiple audiences via various levels of service, such as those under the NSF GSE Extension Service grants, the evaluation must include room to evolve and stretch with the project. With this in mind, this part of the session addresses the question, "How do we keep the evaluation practical and feasible within multi-level projects?"
|
|
|
Use of Technology in Evaluation: How Does It Help and How Does It Hinder?
|
| Vicky Ragan, Evaluation and Research Associates, vragan@eraeval.org
|
|
Most, if not all, GSE Extension projects are supported by technology. The technologies used in these projects vary in how they address goals and in how they support the evaluation. Within the evaluation process, these technologies may support data collection, tracking of project processes, and dissemination of evaluation results and reports. At a higher level, they may serve as a mechanism to translate, transfer, and diffuse knowledge. The panelists recognize the relevance of examining the role of technology in these multi-level projects, where its use is an integral part of the project. This part of the session addresses the question, “How is technology used in multi-level projects, and how does that relate to the evaluation?”
| |
|
Finding Common Ground: Is It Possible?
|
| Beverly Farr, MPR Associates Inc, bfarr@mprinc.com
|
|
The projects included in the NSF Extension Services Grants Program all have the ultimate goal of increasing female participation in STEM course taking and STEM careers. They vary, however, in the levels they address, from state departments to community colleges to schools, and in the strategies they use to achieve their objectives. Nonetheless, they were funded to provide services that would build the capacity of their recipients and reach toward the ultimate goal. The activities of the projects cannot always be directly linked to the ultimate goal, however, and there is a need to examine intervening outcomes to assess the impact of the projects overall. As the funder, NSF has a desire to know what the projects together contribute to the accomplishment of the ultimate goal. With this in mind, the evaluators began discussions about establishing common indicators, and this part of the session addresses the question, “What is the value of establishing common indicators across projects?”
| |
|
Strategies for Guiding and Tracking Sustainability: Will It Last?
|
| Donna Brock, Evaluation Consulting Services Inc, djbrock.ecs@cox.net
|
|
Evaluators play a role in guiding and tracking program sustainability. Mancini and Marek (2004) and Marek, Mancini, and Brock (2003) identify “Demonstrating Program Results” as one successful sustainability factor. This factor includes: 1) evaluation plans developed prior to program implementation, 2) regularly conducted evaluation, and 3) the use of evaluation to inform program modifications. However, tracking and guiding effective sustainability practices necessitates a larger lens that considers the macro-system of program function (e.g., leadership, collaboration, funding, staff involvement, project adaptability, and fidelity to project implementation) and the community. Thus, evaluators need to integrate various streams of data to inform the sustainability process. Evaluators of these particular Extension Service projects provide a unique perspective on this process given the multiple layers, constantly changing contexts, and challenges to measuring impacts and tracking fidelity. This part of the session will address the question, "What is the role of the evaluator in facilitating sustainability?"
| |