Enhancing Evaluation Stakeholder Responsiveness Through Collaborative Development of Data Collection Instruments

Presenter(s):

Karen Kortecamp, George Washington University, karenkor@gwu.edu

Abstract:
This paper reveals that stakeholder responsiveness to evaluation findings of a multi-year Teaching American History-funded professional development project was enhanced through collaborative development of data collection instruments. Prior to the launch of the professional development program, the collaborative process engaged project historians, the academic project director, and the evaluator in examining underlying assumptions about which knowledge and skills were of most value for teachers and students. Those discussions led to a series of thoughtful deliberations about how best to measure the knowledge and skills that were identified. The paper discusses the collaborative process and decision-making that led to the development of high-quality data collection instruments and contributed to stakeholder responsiveness.

Reflecting on Practice in Evaluating Culturally Competent Teaching Strategies

Presenter(s):

Corina Owens, University of South Florida, cmowens@usf.edu

Michael Berson, University of South Florida, berson@coedu.usf.edu

Liliana Rodriguez-Campos, University of South Florida, liliana@usf.edu

Abstract:
Cultural competence is an elusive social construct that invites socially acceptable answers and can be considered a hot-button topic. People tend to respond to certain types of questions in a manner they believe others will view favorably, and items addressing one’s self-perception or self-rating of culturally competent classroom practices are particularly susceptible to social desirability bias. Evaluators need to understand the issues surrounding social desirability when attempting to measure cultural competence, as evidenced by the lessons learned through a collaborative evaluation of culturally competent instructional strategies infused throughout a civic education professional development program. The purpose of this paper is to share these lessons learned to help other evaluators gain a better understanding of ways to evaluate culturally competent teaching strategies.

Summative Evaluation for Marketing of Science Teachers and Induction (MOSTI): An Eclectic Evaluation Approach to the Improvement of Science Teacher Recruitment, Induction, and Retention in the Middle School Grades

Presenter(s):

Bryce Pride, University of South Florida, bryce_pride@msn.com

Merlande Petit-Bois, University of South Florida, petitbois.m@gmail.com

Robert Potter, University of South Florida, potter@cas.usf.edu

John Ferron, University of South Florida, ferron@usf.edu

Abstract:
This paper focuses on lessons learned from a program providing recruitment, induction, and retention activities for three cohorts of career-change individuals moving into teaching. Taking an eclectic approach (Fitzpatrick, Sanders, & Worthen, 2004), we evaluated the extent to which the program helped second-career teachers prepare to become middle school science teachers and remain in the school system as permanent teachers. To inform the report, quantitative and qualitative data were collected from surveys, content exams, classroom observations, mentor logs, and a focus group. Ideas for improvement were provided from the perspectives of teachers, mentors, administrators, and evaluators, and these multiple perspectives were used to gauge the effectiveness of program implementation and to recommend improvements. Feedback from teachers and mentors regarding training sessions and program needs has assisted MOSTI administrators with decisions about program improvement.

Evaluation Capacity Building in Health Disparities Research: Achieving Empowerment Using the Model for Collaborative Evaluations (MCE)

Presenter(s):

LaShonda Coulbertson, Center for Equal Health, ltcoulbertson@msn.com

Desiree Rivers, Center for Equal Health, drivers@health.usf.edu

Abstract:
Evaluation capacity building in research-oriented settings is not without its challenges. While research is readily adopted and planned for, evaluation, particularly in the context of community-based research, is often relegated to an afterthought in the planning process. By utilizing a framework such as the Model for Collaborative Evaluations (MCE) (Rodriguez-Campos, 2005), organizations can increase evaluation capacity while simultaneously demonstrating the significance and impact of a well-planned, “standardized” evaluation to the research process. The labor of capacity building should equip organizations to evaluate the evaluator, empowering them to view the evaluation process and the methods undertaken critically, from a sturdy foundation. The authors will discuss an evaluation capacity-building process for a $6M federally funded health disparities project, the potential for evaluation in health disparities research, and the extent to which advocating in this direction will lead to innovation in the health models currently utilized (or not) in this field.