
Session Title: Exploring New Roles and Responsibilities in Educational Evaluation
Multipaper Session 597 to be held in Ventura on Friday, Nov 4, 8:00 AM to 9:30 AM
Sponsored by the Pre-K - 12 Educational Evaluation TIG
Chair(s):
Catherine Nelson, Independent Consultant, catawsumb@yahoo.com
Discussant(s):
Tom McKlin, The Findings Group, tom@thefindingsgroup.com
The Three Rs of Existing Evaluation Data: Revisit for Relevance, Reuse, and Refine
Presenter(s):
Kate Clavijo, Pearson, kateclavijo@gmail.com
Kelli Millwood, Pearson, kelli.millwood@pearson.com
Abstract: Existing evaluation data are data collected or recorded at an earlier time, often for a purpose entirely different from the research at hand. This paper will describe how to save resources by using existing data to inform decisions, with examples of collecting and then revisiting, reusing, and refining existing evaluation data. The first example describes how we identify school district and community needs prior to implementing professional development. The second example compares evaluation outcomes for face-to-face and online professional development models. A third example illustrates how existing evaluation data can answer questions about the impact of professional development on novice and experienced educators. While there is nothing new about using secondary data in evaluation, we hope the examples set forth in this presentation will encourage participants to rethink and revitalize how they approach the use of secondary data.
Evaluation and Educational Innovation: Coping and Growth
Presenter(s):
Talbot Bielefeldt, International Society for Technology in Education, talbot@iste.org
Brandon Olszewski, International Society for Technology in Education, brandon@iste.org
Abstract: A university worked with rural schools to implement a locally developed integrated science curriculum under a U.S. Math and Science Partnership grant. Evaluators found that standard assessments were not sensitive to non-standard interventions. Lacking relevant science assessments, evaluators and program staff created new evaluation tools to accompany the curriculum. The educators' innovations led to the transformation of the evaluation department's skills and positioned the organization to address new challenges. This presentation documents the trials, errors, and eventual success as evaluators changed their role and adopted a new suite of assessment skills. Presenters will discuss the logistics of bringing new skills (in this case, Item Response Theory) in-house versus outsourcing; professional development, time, and technology are all factors in this transition. Presenters will also discuss the implications of these changes for an organization that wants to participate in K-12 educational evaluations in the United States and other countries.
What is a 'Good' Program? A Comprehensive Meta-Evaluation of the Project Lead The Way (PLTW) Program
Presenter(s):
Melissa Chapman Haynes, Professional Data Analysts Inc, melissa.chapman.haynes@gmail.com
David Rethwisch, University of Iowa, drethwis@engineering.uiowa.edu
Abstract: What makes a program 'good'? This could mean many different things to many different people. Good in the sense that we should continue to fund it? Good in the sense that expected outcomes were obtained? And so it goes. There were numerous conversations at AEA 2010 about how evaluators (and program leaders) are often restricted by funding sources in what they are able to examine. Unfortunately, evaluators typically do not have access to prior evaluations of programs unless they have personally worked on those evaluations. The focus of this work is a meta-evaluation of Project Lead The Way, a secondary engineering program with fast growth but equivocal results about program outcomes. Drawing upon a small but growing group of PLTW researchers and evaluators, technical reports and other unpublished work were obtained through personal request, and the Program Evaluation Standards were used to meta-evaluate PLTW.
But What Does This Tell Me? Teachers Drawing Meaning From Data Displays
Presenter(s):
Kristina Ayers Paul, University of South Carolina, paulka@mailbox.sc.edu
Ashlee Lewis, University of South Carolina, lewisaa2@mailbox.sc.edu
Min Zhu, University of South Carolina, zhum@mailbox.sc.edu
Xiaofang Zhang, University of South Carolina, zhang29@mailbox.sc.edu
Abstract: Educators are continually asked to use data to inform instructional decision-making, yet few educators have the technical training needed to interpret complex data sets. Evaluators providing data to non-technical audiences should be mindful of the need to present data in ways that are understandable, interpretable, and comparable (May, 2004). Researchers from the South Carolina Arts Assessment Program (SCAAP) are exploring new methods of sharing program assessment data with teachers, principals, and regional coordinators in ways that will be meaningful to these groups. In this paper presentation, we will share the results of a study examining educators' reactions to various data display formats. The study uses a think-aloud protocol to examine teachers' thinking as they attempt to draw meaning from assessment data presented in an assortment of data displays.
