
Session Title: Transferring Evaluation Experience Across Program Contexts: Discursive Evaluation With Two National Science Foundation ITEST Programs, Carnegie Mellon University's ACTIVATE and ALICE
Multipaper Session 913 to be held in Capistrano A on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Organizational Learning and Evaluation Capacity Building TIG
Chair(s):
Cynthia Tananis, University of Pittsburgh, tananis@pitt.edu
Abstract: School reform programs often operate in an atmosphere of diversity and complexity in which communication and organizational learning are increasingly difficult. As evaluators, our challenge is to create common ground from which to speak about program inputs, implementation, and goals. We have found that discursive evaluation strengthens working relationships with clients and allows knowledge to be transferred and leveraged across projects. This panel focuses on discursive evaluation with two National Science Foundation ITEST-funded programs of Carnegie Mellon University's computer science department. Our discursive relationship with the program teams has increased evaluation capacity across both projects, in both content expertise and effective evaluation practices in computing education. By investing in relationships with our clients, not only are the common educational endeavors strengthened but so too are individual capacities. In this session, we describe the strengths and challenges that accompany a discursive evaluation approach.
The Role of the ACTIVATE Workshops in Teachers' Professional Growth and Student Learning: Measuring the Effectiveness of Teachers' Professional Development in Computer Science Within a K-12 Education Context
Yuanyuan Wang, University of Pittsburgh, yuw21@pitt.edu
The ACTIVATE year 1 evaluation included summer workshop surveys, a follow-up survey, and follow-up interviews. Together, these instruments formed an integrated series of evaluation activities for K-12 teacher professional development in computer science. The summer workshop surveys consisted of a baseline survey for all participants, plus pre-post workshop surveys and a post-workshop skills assessment for each of the three workshops (Alice, Computational Thinking, and Java). The follow-up survey and interviews focused on teachers' implementation of workshop materials and activities and on their impact on students' interest in their computer science courses and in future computer-related careers. Findings indicated that the goals of ACTIVATE were substantially achieved. Specifically, teachers made good use of the workshop materials and activities in their classrooms, and they strengthened their content knowledge and skills in computer science through their participation in the workshop(s); this strengthened knowledge base benefited student learning and contributed to students' increased interest in computer-related careers.
Go Ask Alice: Faculty Mentoring and the Implementation of Alice 3.0 in Community College Contexts
Keith Trahan, University of Pittsburgh, kwt2@pitt.edu
Cara Ciminillo, University of Pittsburgh, ciminill@pitt.edu
Community college faculty are notoriously disconnected; thus, reform programs designed to change the way faculty teach and students learn must find a way to gain traction. For CMU's Alice program, the solution was a combination of a faculty mentor network and student interest in graphics, animation, and storytelling. The ALICE year 1 evaluation consisted of baseline and end-of-course surveys administered to the students of participating faculty at community colleges in New Jersey, Texas, and Pennsylvania. Courses in which ALICE was implemented included introductory, general, and advanced computer programming courses. At the end of the year, interviews with participating faculty and key program personnel were conducted to collect information on both the implementation of instructional practices and the experience of the ALICE mentoring network. The focus of the ALICE evaluation was threefold: student experience in courses utilizing Alice, faculty experience in those courses, and faculty perspectives on the Alice mentoring network.
Discursive Evaluation: A Process of Capacity Building for Both Evaluators and Program Leaders
Cara Ciminillo, University of Pittsburgh, ciminill@pitt.edu
Keith Trahan, University of Pittsburgh, kwt2@pitt.edu
Having a discursive relationship with the program teams has increased evaluation capacity across both the ACTIVATE and ALICE projects, in both computer programming content expertise and effective evaluation practices in computing education. In addition, the transfer and application of knowledge have helped inform future funding proposals and, in fact, have already informed the design of one of our newest projects, Duke Scale Up. By investing in relationships with our clients, not only are the common educational endeavors strengthened but so too are individual capacities. In this paper, we describe the strengths and challenges that accompany a discursive evaluation approach.
