Evaluation 2008



Session Title: Collaborating With Clients and Competitors in Education Evaluation
Panel Session 110 to be held in Mineral Hall Section G on Wednesday, Nov 5, 4:30 PM to 6:00 PM
Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG
Chair(s):
Jon Price,  Intel Education,  jon.k.price@intel.com
Discussant(s):
Daniel Light,  Education Development Center Inc,  dlight@edc.org
Abstract: In 2000, Intel began its worldwide education programs and companion evaluation work. Since then, Education Development Center and SRI International, as well as evaluation organizations throughout the world, have been engaged as evaluators. Why would Intel engage multiple evaluators? What are the challenges, advantages, and opportunities of a multi-evaluator approach? The session will include a multi-perspective discussion of how working in a collaboration of competing organizations and a corporate client has informed ongoing program implementation and evaluation design. The session will focus on what the presenters have learned through collaboration between, and triangulation across, organizations, synergistically building broader program learning. Contributors will explore the ways in which evaluators and the client approach and implement evaluation within this context of cross-organization collaboration, including lessons learned and guidelines for making such collaborations productive.
Development, Growth and Sustainability: Building Collaboration as an Evaluation Practice
Roshni Menon,  Education Development Center Inc,  rmenon@edc.org
Scott Strother,  Education Development Center Inc,  sstrother@edc.org
When the challenge of creating an evaluation strategy for a new program gives way to the challenge of sustaining the program on a global scale, effectively managing ongoing evaluation efforts may depend on building collaboration into revised strategies. Drawing on the early design components of the Intel Teach teacher professional development program, this portion of the panel discussion will illustrate the challenges of maintaining a single-agency partnership and the transition that resulted in collaborative efforts with over two dozen agencies worldwide. Transforming education systems and supporting national competitiveness are difficult, long-term endeavors; ongoing, embedded evaluation can help create policies that support real change.
Degrees of Collaboration with Competitive Evaluation Partners: Working Together and Separately
Ann House,  SRI International,  ann.house@sri.com
Ruchi Bhanot,  SRI International,  ruchi.bhanot@sri.com
A basic tension in collaborating across evaluation organizations is how different evaluators can remain coordinated and consistent enough that their findings fit together, yet separate and distinct enough to explore different facets of a single program. Using Intel's Teach Essentials Online program as an example, this presentation will focus on the different levels of partnership SRI and EDC used to collaborate on (but not duplicate) both larger conceptual matters (including research design and forming broader statements about the program) and the detailed daily tasks (including instrument design and identifying informants). The discussion will provide a picture of the varying degrees of cross-organizational coordination and collaboration at different points in this evaluation work and reflect on the strategies used to determine the appropriate level of coordination.
Using Multiple Evaluators as an Evaluation Policy
Jon Price,  Intel Education,  jon.k.price@intel.com
This portion of the panel session will discuss the strategies associated with an evaluation design that employs multiple organizations on a global scale, with emphasis on the coordination between Intel Corporation as the grantor and SRI International and the Education Development Center's Center for Children and Technology as collaborating (or primary) grant recipients. Such a strategy begins with the development of clear program goals, indicators, and models that can be used to measure impact, ensure consistency, and reduce variability across evaluation designs. A discussion of the challenges encountered and the considerations necessary to manage such a broad-scale evaluation effort will follow, with reference to available rubrics, instruments, and observation/interview protocols.

