
Session Title: Viabilities of Technologies in Evaluation Research
Multipaper Session 116 to be held in TRAVIS B on Wednesday, Nov 10, 4:30 PM to 6:00 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Margaret Lubke, Utah State University, mlubke@gmail.com
Improving Evaluation Quality Through Use of an Interactive Database
Presenter(s):
Jan Middendorf, Kansas State University, jmiddend@ksu.edu
Aaron Schroeder, Kansas State University, aaron@ksu.edu
Sarah Bradford, Kansas State University, sbradford@ksu.edu
Valerie York, Kansas State University, vyork@ksu.edu
Abstract: As evaluator for Kan-ed, a statewide network initiative funded by the Kansas Legislature, the Office of Educational Innovation and Evaluation (OEIE) created and maintains the Kan-ed Membership Database, a relational database that houses all pertinent information related to the evaluation. Given the project’s large size and OEIE’s need to respond quickly to client data requests, it is vital that all evaluation data be stored in one centralized location for ease of use. This high-quality, interactive tool quickly generates responses to data requests and reports for the client and other key stakeholders, while maintaining a large bank of information collected during the past seven years of the evaluation project. This presentation will provide background on the information maintained within the database and how that information is used on a daily basis. Presenters will also discuss the ways in which the information is kept accurate and up to date.
Lessons Learned From Using Technology to Increase Study Participation Among Child Welfare Service Recipients
Presenter(s):
Lara Kaye, Center for Human Services Research, lkaye@uamail.albany.edu
Lynn Warner, State University of New York at Albany, lwarner@uamail.albany.edu
Rose Greene, Center for Human Services Research, rgreene@uamail.albany.edu
Corinne Noble, Center for Human Services Research, cnoble@uamail.albany.edu
Abstract: The use of personal communication technologies (e.g., email, text messaging, and cell phones) may facilitate recruitment and retention of persons who are often under-represented in evaluation studies, including recipients of potentially stigmatizing services, such as those delivered in child welfare settings. This presentation emphasizes lessons learned when alternatives to traditional telephone and mail data collection methods were offered in a multi-county study whose main purpose was to evaluate satisfaction with strength-based services received by families at risk of child abuse or neglect. Given the finding that few participants provided email or text contact information, and that the great majority of satisfaction surveys were completed through traditional methods, the discussion will focus on the multiple barriers to using technology (e.g., access, comfort, confidentiality concerns) and the implications for improving the viability of technologies in evaluation research involving under-represented groups.
Database Use in Evaluation Research: Opportunities and Challenges for Supporting Continuous Improvement of Partnerships, Programs, and Projects
Presenter(s):
William R Penuel, SRI International, william.penuel@sri.com
Barbara Means, SRI International
Abstract: This paper presents examples of how evaluators can support partnerships, programs, and projects in using large-scale, longitudinal databases for continuous improvement. We argue that such databases can be valuable in supporting continuous improvement when evaluators and practitioners create partnerships in which (1) data inform program designers about potentially effective models they can adapt; (2) data are complemented by locally developed implementation and outcome measures; and (3) data from different institutional sectors enable analysis of cross-contextual change in outcomes for individuals and settings. We develop examples of each of these potential uses and detail the roles evaluators can play in supporting continuous improvement.
Evaluating Technology in Health Care: Testing the Usability of a Clinical Trial Query Tool Using Think Aloud Methods
Presenter(s):
Stuart Henderson, University of California, Davis, stuart.henderson@ucdmc.ucdavis.edu
Estella Geraghty, University of California, Davis, estella.geraghty@ucdmc.ucdavis.edu
Julie Rainwater, University of California, Davis, julie.rainwater@ucdmc.ucdavis.edu
Abstract: In the health care field, new technology is constantly being introduced to improve patient care, streamline medical record keeping, and increase clinicians’ access to information. Evaluators can play an important role in the development and improvement of new health technology by devising effective methods to evaluate its implementation and usability from the users’ point of view. In this paper, we discuss the evaluation of a clinical trial query tool (based on Harvard University’s Informatics for Integrating Biology and the Bedside, or i2b2, tool) adapted by the University of California Davis Health System. To understand clinicians’ experiences using the tool, we developed an evaluation toolkit that included surveys, think-aloud sessions, and cursor movement analysis (through Camtasia Studio software). Focusing on the think-aloud and cursor movement analyses, we discuss the opportunities and challenges these methods present. This project provides an example of both evaluating the use of technology and using technology in evaluation.
