Session Title: Clients Speak Out About Evaluation
Multipaper Session 532 to be held in Texas E on Friday, Nov 12, 9:15 AM to 10:45 AM
Sponsored by the Evaluation Use TIG
Chair(s):
Susan Tucker,  Evaluation & Development Associates, sutucker1@mac.com
Discussant(s):
Lyn Shulha,  Queen's University at Kingston, lyn.shulha@queensu.ca
Exploring Evaluation Use: Multiple Representations of Evaluation Findings
Presenter(s):
Michelle Searle, Queen's University at Kingston, michellesearle@yahoo.com
Christine Doe, Queen's University at Kingston, christine.doe@queensu.ca
Lyn Shulha, Queen's University at Kingston, lyn.shulha@queensu.ca
Susan Elgie, Independent Consultant, selgie@sympatico.ca
Abstract: Evaluation as systematic inquiry has many objectives; an important one is promoting understanding and assimilation of results by the client(s). Complex evaluations benefit from the use of varied reporting methods. This research investigates the process of tailoring the representation of findings to facilitate deep understanding of the evaluation context. After a year of qualitative data collection, our four-member evaluation team recognized that varied representations of the findings served the client’s needs by speaking to multiple audiences. In this paper, we examine our documented efforts to promote evaluation use by creating purposeful and varied representations of the evaluation findings. We draw on data from deliberations about the evaluation findings within our team and with our clients, a focus group with the clients, and a stakeholder interview. This paper describes our process, provides samples, and considers the implications of drawing on multiple representations in reporting evaluation findings.
Enhancing Evaluation Quality Through Client-Centered Reporting
Presenter(s):
Micheline Magnotta, 3D Group, mmagnotta@3dgroup.net
Abstract: Every high-quality evaluation begins with understanding the clients’ and stakeholders’ needs. Yet most evaluation reports are written with little attention paid to the report users and a great amount of attention paid to the technical aspects of quality, such as following a report template that an organization has proudly “perfected” over the years. Adding to the problem, evaluators often perceive that following a template is justified because it increases cost-effectiveness, when in fact funds may be needlessly spent. This session begins with the User-Based perspective of quality and examines a spectrum of qualitative and quantitative report formats that have been effectively used in the field. Practical suggestions will help evaluators shift their definition of report quality from a “Product-Based View” to one that also embraces the “User-Based View” (Schwandt, 1990), resulting in evaluation reports that are higher quality because clients and stakeholders obtain greater use from them.
Measuring Evaluation Use and Influence Among Project Directors of State Gaining Early Awareness and Readiness for Undergraduate Programs Grants
Presenter(s):
Erin Burr, Oak Ridge Institute for Science and Education, erin.burr@orau.org
Jennifer Morrow, University of Tennessee, Knoxville, jamorrow@utk.edu
Gary Skolits, University of Tennessee, Knoxville, gskolits@utk.edu
Abstract: This paper describes the development of an instrument used to measure evaluation use, influence, and factors that have an impact on the use of evaluations among state project directors of the national Department of Education program, "Gaining Early Awareness and Readiness for Undergraduate Programs" (GEAR UP). The survey instrument was administered to 17 state project directors via online and paper-and-pencil surveys. Results indicated that GEAR UP project directors are using their program evaluation reports for instrumental, conceptual, symbolic, and process-related purposes. Project directors reported evaluation influence at the individual, interpersonal, and collective levels. Both implementation factors and decision and policy setting factors had an impact on project directors' decisions to use their programs' evaluations. The study’s limitations, implications, and planned future research will be discussed.
Chilean Evaluated Teachers Give Their Opinions About the National Teacher Evaluation System
Presenter(s):
Dante Cisterna-Alburquerque, Michigan State University, cisterna@msu.edu
Abstract: A study was conducted in a Chilean district to describe the opinions of evaluated teachers and school principals about the national teacher performance evaluation system and the ways they are using the reported information about teachers’ performance. Evaluated teachers and principals positively value the clear procedures and adequate organization of the system. By contrast, they rate poorly the burdensome tasks required to respond to the instruments and the quality of the reports, and they make little use of the information reported for teachers’ improvement. The results of this study suggest that this national, large-scale assessment, whose official purpose is formative, has focused on the quality of its operative procedures and its technical aspects at the expense of strengthening the use, detail, and pertinence of the reported information for its intended users.