Pushing Buttons and Breaking Code: Evaluating Instructional Technology Pilots in Higher Education

Presenter(s):

Joel Heikes, University of Texas at Austin, joel.heikes@austin.utexas.edu

Stephanie Corliss, University of Texas at Austin, stephanie.corliss@austin.utexas.edu

Erin Reilly, University of Texas at Austin, erin.reilly@austin.utexas.edu

Abstract:
Introducing a new instructional technology into any learning environment is a complex process involving many stakeholders with often highly divergent interests: technicians want the technology to work flawlessly, administrators seek cost savings, instructors demand efficiency, and students expect ease of use. Add to this mix the evaluator's desire to produce a quality evaluation, and the potential for stressful outcomes grows. Drawing on our experiences evaluating instructional technology pilots over the past six years, we identify common challenges, stunning successes, and lessons learned, and from them develop a model for conducting quality instructional technology evaluation in a higher education context.


Real World, Real Use: The Impact of Integrating Student-Centered Learning in Adult Online Instruction in Mathematics and Science

Presenter(s):

Jane A. Rodd, State University of New York at Albany, jr937855@albany.edu

Dianna L. Newman, State University of New York at Albany, dnewman@uamail.albany.edu

Patricia J. Lefor, Empire State College, pat.lefor@esc.edu
Abstract:
This paper documents evaluation findings on the integration of constructivist methods into technology-supported distance learning with adult learners in mathematics and science. The overarching goal of the project was to promote content relevancy by creating authentic learning experiences for students. To achieve this goal, selected existing online courses were modified and new courses were developed to reflect a more problem-based approach to learning, relevant to students' lives and careers. A multi-phased, mixed-methodology evaluation design was developed and used to address the evaluation objectives: the project's ability to serve diverse students, changes in course-related affect, and changes in course-related content knowledge. Participants were 1,458 adult undergraduate learners (mathematics, n = 938; science, n = 520) enrolled in online mathematics and science courses at a four-year public college. Results indicated that students made significant gains in content knowledge, transfer of content, content-specific affect, and generalized learning affect.


Evaluation of an Online Training Program for Informal Science Educators: The FETCH! with Ruff Ruffman Program

Presenter(s):

Christine Paulsen, Concord Evaluation Group LLC, cpaulsen@concordevaluation.com

Christopher Bransfield, Concord Evaluation Group LLC, cbransfield@concordevaluation.com

Abstract:
This paper describes an evaluation of an online training program developed with National Science Foundation funding (WGBH's FETCH! with Ruff Ruffman). The program was designed for informal educators leading science activities with elementary-age kids, including after-school providers, teachers, camp counselors, librarians, and museum staff. We used a treatment-control group, pre- and post-test design with random assignment. Fifty-four programs from across the country participated in the study. Findings show that the program helped leaders feel more prepared and more comfortable leading hands-on science activities with kids, enhanced leaders' ability to convey science concepts and processes, and enhanced their ability to engage kids and get them excited about doing science activities. This paper presents a summary of key findings regarding program impact as well as methodological lessons learned, with a discussion of the implications for other evaluators working in informal learning environments.