Evaluation 2008

Contact emails are provided for one-to-one contact only and may not be used for mass emailing or group solicitations.

Session Title: Methods and Models in Evaluating Educational Technology
Multipaper Session 428 to be held in Room 109 in the Convention Center on Thursday, Nov 6, 4:30 PM to 6:00 PM
Sponsored by the Distance Ed. & Other Educational Technologies TIG
Chair(s):
Saul Rockman,  Rockman et al,  saul@rockman.com
Improving Websites with Usability Testing
Presenter(s):
Michael Lambur,  Virginia Polytechnic Institute and State University,  lamburmt@vt.edu
Abstract: Usability testing is a means for determining how well people use something, in this case a website, for its intended purpose. This evaluative process involves observing people using a website in as realistic a situation as possible to discover errors and areas for improvement. Conducting a usability test of a website can be invaluable in improving its functionality and, ultimately, how well it serves its purpose for users. This presentation will walk participants through the process of usability testing and will focus on: 1) deciding what needs to be tested, 2) determining how many users to involve in the test, 3) identifying tasks that will be performed by the user in the test, 4) developing questions that will be asked of the users before and/or after the test, 5) deciding whether to use unobtrusive or obtrusive observation, and 6) preparing observer guidelines. Experience gained from website usability testing with the eXtension initiative (http://www.extension.org) will also be shared.
Some Reasons for Incorporating Mixed-Methods Designs when Evaluating the Efficacy of Educational Learning Tools
Presenter(s):
Dane Christian,  Washington State University,  danechristian@mail.wsu.edu
Michael Trevisan,  Washington State University,  trevisan@wsu.edu
Angela Oki,  Washington State University,  info@currentconceptions.com
Phil Senger,  Current Conceptions Inc,  info@currentconceptions.com
Abstract: An evaluation was undertaken to test the efficacy of a 3D animated instructional video on reproductive physiology. A four-factor one-way analysis of variance (ANOVA) between-groups experimental design was employed. Six universities from across the country participated in the study, with subjects randomly assigned to control or experimental conditions. The results of this initial study were favorable, with an effect size greater than one standard deviation unit. This paper highlights various elements of the design used in the study. After accounting for potential areas of weakness in the design, a rationale for the use of qualitative methods in subsequent studies is provided. Specifically, during-treatment observations and post-treatment interviews and focus groups are suggested. The paper adds to the growing belief in the evaluation and social research communities that mixed-methods designs, when correctly utilized, can add valuable information in support of a study's purpose.
Evaluation Models for Educational Technology Projects
Presenter(s):
Michael Coe,  Northwest Regional Educational Laboratory,  coem@nwrel.org
Abstract: This presentation reports findings from an NSF-funded project intended to develop improved models of evaluation research for educational uses of electronic technologies. The premise is that the evaluation of educational technology applications is hampered by oversimplified, underspecified models of both the project designs and the project evaluations. Important aspects of the project designs are often left out of the evaluation designs, and the relationships between project components may be misrepresented in the evaluation designs. These issues can lead to unproductive evaluation questions and research methods. Many of these problems could be solved, at least in part, by applying program theory and basic causal modeling concepts. The presentation will include the rationale for the project, brief examples of work we have done over the past few years, and findings from the current study.
Evaluating Educational Technology: Approaches to Collecting Meaningful Data
Presenter(s):
Shani Reid,  Macro International Inc,  shani.a.reid@macrointernational.com
Abstract: In 2005, Macro International ventured into fairly uncharted territory when it was contracted to evaluate an online learning game being developed by Maryland Public Television. The game, which is being developed with funds from a federal Star Schools grant, aims to enhance the pre-algebra and literacy skills of middle school students. The final game will consist of 9 puzzles, each with 3 levels of difficulty. During the development phase, Macro's responsibility has been to provide stakeholder feedback on the game to project staff at various stages of development. How do you collect formative evaluation data on educational gaming technology? Moreover, how do you collect quality feedback from a target audience between the ages of 11 and 13? In this session we will discuss the various methodologies employed (including the use of embedded game features) to obtain valid and reliable data from students and educators.
