In a 90-minute Roundtable session, the first rotation uses the first 45 minutes and the second rotation uses the last 45 minutes.

Roundtable Rotation I: Peacemaker: Balancing Paradigmatic Concerns and Client Expectations in Collaborative Evaluation

Roundtable Presentation 722 to be held in Exec. Board Room on Friday, Nov 4, 2:50 PM to 4:20 PM

Sponsored by the Quantitative Methods: Theory and Design TIG and the Pre-K - 12 Educational Evaluation TIG

Presenter(s):
Scot Rademaker, University of South Florida, srademaker@mail.usf.edu
Tyler Hicks, University of South Florida, tahicks@mail.usf.edu
Sarah Mirlenbrink-Bombly, University of South Florida, mirlenbr@mail.usf.edu
Connie Walker, University of South Florida, cwalkerpr@yahoo.com

Abstract:
In an era of high-stakes accountability, clients frequently request statistical analysis to inform the evaluation product. This is not necessarily a problem, depending on the source of data. In many cases, however, the evaluation questions do not lend themselves well to conventional parametric analysis; the problem is especially acute when questionnaires are the primary data source. Non-parametric analysis may present a viable quantitative solution. In this evaluation, the authors model how to use non-parametric statistical analysis to synthesize ordinal data (e.g., Likert-scale responses). The authors demonstrate how to navigate the client's requests and strike a balance between disciplines of inquiry so that the proposed questions are answered effectively. Results from an evaluation of a university-sponsored tutoring project will be delineated to contextualize this discussion.
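As a minimal sketch of the kind of rank-based analysis the abstract describes (not the authors' actual method, and using hypothetical data), the Mann-Whitney U statistic compares two groups of ordinal Likert responses using only their ranks, so no interval-scale assumption is needed. In practice one would use `scipy.stats.mannwhitneyu`; the pure-Python version below just makes the mechanics visible:

```python
def average_ranks(values):
    """Assign 1-based ranks; tied values share the mean of their rank positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # advance j past the run of values tied with values[order[i]]
        while j < len(order) and values[order[j]] == values[order[i]]:
            j += 1
        mean_rank = (i + 1 + j) / 2.0  # mean of 1-based positions i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = mean_rank
        i = j
    return ranks


def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic (the smaller of U_a, U_b) for two ordinal samples."""
    combined = list(group_a) + list(group_b)
    ranks = average_ranks(combined)
    n_a, n_b = len(group_a), len(group_b)
    rank_sum_a = sum(ranks[:n_a])
    u_a = rank_sum_a - n_a * (n_a + 1) / 2.0
    return min(u_a, n_a * n_b - u_a)


# Hypothetical pre/post Likert-scale (1-5) satisfaction responses.
pre = [2, 3, 3, 4, 2]
post = [4, 5, 4, 3, 5]
print(mann_whitney_u(pre, post))  # 3.0
```

The U statistic would then be compared against a critical value (or converted to a p-value) to judge whether the two response distributions differ, which is the step a library routine handles.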

Roundtable Rotation II: Mediating Value Preferences Within the Evaluation Consultancy Role

Presenter(s):
Debbie Zorn, University of Cincinnati, debbie.zorn@uc.edu
Sarah Woodruff, Miami University of Ohio, woodrusb@muohio.edu
Holly Raffle, Ohio University, raffle@ohio.edu
Barry Oches, Ohio University, oches@ohio.edu

Abstract:
Evaluators often serve as mediators among groups with different value preferences. For example, the value preferences of local project staff regarding evaluation sometimes conflict with those of federal and state funding agencies that set strict standards for project design and evaluation methodology and rigor. This roundtable will discuss efforts by a statewide cross-project evaluation team, working with five local Mathematics and Science Partnership (MSP) projects, to mediate those differences. MSP projects are awarded federal funds to improve K-12 teachers' content knowledge in order to raise students' academic achievement. Our cross-project evaluation team has assumed a consultative role, helping local projects develop and conduct more rigorous and useful local evaluations that address local, state, and federal information needs. We have found it essential to develop a framework that promotes collective value preferences while supporting local evaluations that may reflect somewhat different value preferences.