The Role of Local Evaluators in Cross-Site Evaluations: Responding to Local and National Needs and Finding Shared Values

Presenter(s):

Courtenay Kessler, University of Wisconsin, Madison, courtenay.kessler@aurora.org

Jessica Rice, University of Wisconsin, Madison, jessica.rice@aurora.org

Abstract:
Project LAUNCH is a national SAMHSA grant program that has funded 24 communities to improve the services and systems that support early childhood development. Our local evaluation team works closely with national cross-site evaluators to gather data to evaluate Project LAUNCH nationwide, while also implementing a relevant local evaluation plan. We work with local program providers to identify successful, feasible data collection methods and to develop useful tools. Critical steps in this process include identifying meaningful definitions consistent with LAUNCH philosophy, program practices, and local values, as well as identifying relevant project-specific outcomes. This presentation will address the need for flexibility and critical listening by the evaluator, and how those traits facilitate a relevant site-specific evaluation within a cross-site evaluation, focusing specifically on the collaborative development of data collection tools that respond to both national and local needs and perspectives.

The Tension of Design and Emergence in the Developmental Evaluation of an International, Multi-Cluster Innovation in Science, Technology, Engineering, and Math (STEM) Education

Presenter(s):

Clare Strawn, International Society for Technology in Education, cstrawn@iste.org

Brandon Olszewski, International Society for Technology in Education, brandon@iste.org

Abstract:
How do you design an evaluation process involving five consortia, each with six to twelve projects, spanning eleven countries? We invite our AEA colleagues to join us in tackling the challenge of developmental evaluation of a multi-level, multi-project, global, online community of practice. This paper discusses what we have learned in the project's first year about managing complexity on a global scale to structure collaborative opportunities while supporting, discovering, measuring, and evaluating emergent collaboration. We discuss the multiple roles and levels of evaluation and the use of online Web 2.0 collaborative tools and analytics.
This evaluation research builds on Wenger's (2010) Digital Habitats by contributing a case study of a globally scaled project investigating the relationship between collaboration and innovation. The Developmental Evaluation approach (Patton, 2010) is critical in this environment as we attempt to capture emergent stories of innovation.