Session Title: Managing Quality Through the Stages of Science, Technology, Engineering and Mathematics (STEM) Educational Evaluation

Panel Session 125 to be held in BONHAM C on Wednesday, Nov 10, 4:30 PM to 6:00 PM

Sponsored by the Pre-K - 12 Educational Evaluation TIG

Chair(s): Leslie Goodyear, National Science Foundation, lgoodyea@nsf.gov

Abstract:
The Program Evaluation Standards and the AEA Guiding Principles represent overarching goals for fostering quality in evaluation studies. Nonetheless, different contexts shape the concept of quality during the stages of a program evaluation. Throughout evaluation studies, evaluation managers make methodological choices to resolve challenges and maintain quality. This panel will discuss a collection of challenges faced by education program evaluators at different evaluation stages: planning, data collection, data analysis, and reporting. Real-world challenges, such as sensitivity to educational settings (formal, informal, and afterschool), balancing partner needs, multisite logistics, data burden on participants, and use in reporting, are situated within The Program Evaluation Standards and illustrated using STEM education evaluation case examples. The challenges presented will be balanced with successful strategies and lessons learned from practice.

Planning for Quality: Balancing Partner Needs in a Multi-site Evaluation of Science Education

Ardice Hartry, University of California, Berkeley, hartry@berkeley.edu

During the planning stages, evaluators often face the challenge of incorporating the requirements and expectations of multiple stakeholders. This presentation considers issues such as balancing the public-policy needs of an advocacy organization with the goals of rigorous and objective research; ensuring that the research and evaluation questions can be addressed within the scope of work; and planning for data collection across a wide range of sites. Drawing on an initiative to collect data on the state of science education in California, which has just completed the planning stages, the presentation will discuss the specific challenges that arose during planning and how solutions, such as educating stakeholders in evaluation through the use of logic models, were sought to balance the needs of the many partners. The presentation concludes with a discussion of planning the data collection timeline, which provides context for the next presentation.

Relation of Data Quality and Participant Burden in a Science-centered Leadership Evaluation Design

Juna Snow, University of California, Berkeley, jsnow@berkeley.edu

An illustrative case will be shared in which an evaluator began to lead and manage an in-progress evaluation design during the data collection stage. The specific challenges the evaluator will discuss relate to issues of data quality and participant burden. A refined design emerged after examining these issues and returning to the question: How will this evaluation be used?

Not My Plan: Executing Someone Else’s Analysis Design While Balancing Partner Needs and Communicating Useable Findings

Ellen Middaugh, University of California, Berkeley, ellenm@berkeley.edu

Nationally funded research and development studies often require researchers and external evaluators to collaborate on data collection to serve multiple goals. Often, these partners share the burden of data collection and may make use of the same data; however, their priorities for data collection and analysis may not coincide. Involving an internal evaluator can help provide continuity of data quality for both partners and promote utilization of evaluation findings by program or curriculum developers. This presentation will discuss two examples in which an internal evaluator joined a project during the analysis and reporting phase. The following challenges, and accompanying strategies, will be discussed: (1) establishing and renegotiating spheres of responsibility, (2) communicating to the client what the design can and cannot deliver, (3) verifying and troubleshooting data quality, and (4) translating findings.

Attention to Quality in Reporting Evaluation Findings in STEM Education Programs

Bernadette Chi, University of California, Berkeley, bchi@berkeley.edu

The Program Evaluation Standards (PES) and the AEA Guiding Principles describe several aspects of quality in evaluation that relate to report clarity (PES U5), timeliness and dissemination (PES U6), and full disclosure of findings (PES P6). While these elements are important for evaluators to address, clients and stakeholders may hold additional criteria that define quality evaluation products for them, including a particular focus on various stakeholders; opportunities to adapt data analysis and evaluation questions; and the opportunity to review and edit report drafts for accuracy. This presentation will discuss examples of reporting from three STEM evaluation studies across different education settings that used different reporting formats but a similar process to produce reports for internal clients, formative evaluation reports for external clients, and summative evaluation reports that were deemed useful and meaningful.