Session Title: Technological Tools That Build Evaluation Capacity: The Power of Blogs, Clickers and Web-based Customized Reports
Multipaper Session 385 to be held in Hopkins Room on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Integrating Technology Into Evaluation TIG
Chair(s):
Paul Longo,  Touro Infirmary,  longop@touro.com
Evaluating Online Community: Finding Connections in an Informal Blog-based Network
Presenter(s):
Vanessa Dennen,  Florida State University,  vdennen@fsu.edu
Abstract: Online communities have become increasingly prevalent with each passing year. As organizations have come to develop and rely on web-based interaction and communities to support activities such as communication, learning, knowledge management, and marketing, the need to evaluate these communities has arisen. Evaluating the dynamic relationships within these communities – their strength, their meaning, and the relative power among participants – can prove challenging. This session will provide a review of methods that have been used to map online communities and demonstrate how community network mapping was done in an evaluation of a loosely-defined blog-based community of practice.
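As a hedged illustration of the community network mapping the abstract describes, the sketch below treats bloggers as nodes and comment interactions as directed edges, then ranks members by inbound interaction. The member names and interaction data are invented for illustration; the actual mapping method used in the evaluation is not specified here.

```python
from collections import Counter

# Hypothetical comment interactions: (commenter, blog author) pairs,
# i.e., directed edges in the community network.
interactions = [
    ("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
    ("dave", "alice"), ("carol", "alice"), ("dave", "carol"),
]

# In-degree: how many interactions each blogger receives --
# a rough proxy for relative prominence within the community.
in_degree = Counter(author for _, author in interactions)
ranking = sorted(in_degree.items(), key=lambda kv: (-kv[1], kv[0]))
print(ranking)  # carol and alice receive the most inbound interaction
```

Richer analyses (centrality, clustering, visualization) would typically use a dedicated network library, but even a simple edge-count like this surfaces the relative prominence of participants.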
Assessing Intuitive Responding as a Function of a Technological Classroom Initiative: Attributes and Values of Computer-assisted Data Collection
Presenter(s):
Sheryl Hodge,  Kansas State University,  shodge@ksu.edu
Iris M Totten,  Kansas State University,  itotten@ksu.edu
Christopher L Vowels,  Kansas State University,  cvowels@ksu.edu
Abstract: While much of the recent evaluation data collection literature encompasses Web-based surveying, there has been little focus on the attributes of other computer-assisted data collection strategies. For the university instructor, the advantages of immediate student feedback are substantial; moreover, for evaluators, computer-assisted data collection techniques produce a wealth of valuable information related to important aspects of data collection protocols. Specifically, such indices as the time required to answer cognitive questions and the frequency of changes in response may provide key evaluation protocol information regarding age-old assumptions about test-taking strategies and their relation to student outcomes. These indices, characterized collectively as indicators of intuitive decision-making, stand to inform evaluators about the value of this data collection tool for evaluation practice. As such, this study will examine individual differences in undergraduates' computer-assisted classroom response characteristics, along with their perceptions of using this tool within large university classrooms.
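The two indices the abstract names, answer latency and frequency of response changes, can be sketched from a hypothetical clicker event log as below. The field names and sample events are illustrative assumptions, not the study's actual protocol or data.

```python
from collections import defaultdict

# Hypothetical clicker log: (student, question, seconds_after_display, answer).
events = [
    ("s1", "q1", 4.0, "A"), ("s1", "q1", 9.5, "B"),  # s1 revised once
    ("s2", "q1", 2.1, "A"),                          # s2 answered fast, no change
]

latency = {}                 # time of first response per (student, question)
changes = defaultdict(int)   # count of answer revisions per (student, question)
last_answer = {}

for student, question, t, answer in sorted(events, key=lambda e: e[2]):
    key = (student, question)
    if key not in latency:
        latency[key] = t               # first response: record latency
    elif answer != last_answer[key]:
        changes[key] += 1              # later, different response: a revision
    last_answer[key] = answer

print(latency[("s2", "q1")])  # 2.1
print(changes[("s1", "q1")])  # 1
```

A fast first response with no revisions would suggest intuitive responding, while long latency or repeated changes would suggest deliberation; how such patterns relate to student outcomes is the empirical question the study poses.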