|
Theoretical and Practical Implications of Evaluation Capacity Building in Taiwanese K-9 School Settings
|
| Presenter(s):
|
| Shu-Huei Cheng, National Taiwan Normal University, shcheng@ntnu.edu.tw
|
| Abstract:
Numerous empirical studies have been conducted on evaluation capacity building (ECB). However, a key gap is the lack of research on evaluation capacity (EC) at the organizational level, especially in non-American schools. Consequently, the purpose of this study was to explore the essential dimensions of EC in K-9 schools in Taiwan, along with approaches to ECB. The study included semi-structured interviews with evaluation experts in universities as well as practitioners in K-9 schools. Consistent with the existing literature, this study identified the main dimensions of EC as supportive leadership, culture, structures, and resources. The sub-dimensions, however, took different shapes in Taiwanese schools and revealed a relationship between EC and context. Government, evaluation experts, and schools all played significant roles in ECB. The study concludes with implications for theory and practice.
|
|
Advancing the Value of Evaluation in a Medical School
|
| Presenter(s):
|
| Derek Wilson, University of British Columbia, derek.wilson@ubc.ca
|
| Karen Joughin, University of British Columbia, karen.joughin@ubc.ca
|
| Leonie Croydon, University of British Columbia, leonie.croydon@ubc.ca
|
| Chris Lovato, University of British Columbia, chris.lovato@ubc.ca
|
| Abstract:
In medical schools across Canada, program evaluation has historically had a limited, circumscribed role within the larger organization. At the University of British Columbia, a formal evaluation unit was established in 2004 when the medical school expanded to a multi-site program model. The mandate of the unit includes evaluation of the MD undergraduate and postgraduate programs, as well as long-term studies of the impact of program expansion. Within this broad mandate, a key focus of efforts to date has been on the evaluation of the undergraduate curriculum. This paper will describe recent and historical strategies employed to build a strong, utilization-focused curriculum evaluation agenda, and to advance the “value” of evaluation by building the culture and capacity for evaluation. These strategies include broadening the scope of evaluation efforts, enhancing the utility of products, promoting engagement/participatory approaches, supporting self-directed evaluation (e.g., E-CLIPS), and establishing infrastructure.
|
|
Knowledge Creation in Healthcare Organizations as a Result of Individuals' Participation in the Executive Training in the Use of Evidence Informed Decision Making
|
| Presenter(s):
|
| Francois Champagne, University of Montreal, francois.champagne@umontreal.ca
|
| Louise Lemieux-Charles, University of Toronto, l.lemieux.charles@utoronto.ca
|
| Gail MacKean, University of Calgary, glmackea@ucalgary.ca
|
| Trish Reay, University of Alberta, trish.reay@ualberta.ca
|
| Jose Carlos Suárez Herrera, University of Montreal, joseko70@hotmail.com
|
| Malcolm Anderson, Queen's University, andersnm@post.queensu.ca
|
| Nathalie Dubois, Montreal Public Health, ndubois@santepub-mtl.qc.ca
|
| Abstract:
Evaluations of attempts to improve the use of evidence-informed decision making in healthcare organizations have focused on the impact on individual skills and knowledge and have failed to grasp how learning processes occur within organizations. We conducted a research project on the organizational impact of two programs aimed at developing capacity and leadership to optimize the use of evidence in decision making. To guide our evaluation, we developed a logic model based on Nonaka's dynamic theory of organizational knowledge creation. We used multiple case studies with embedded units of analysis, relying on a triple comparative design. In each case, we collected data through interviews and documentation. Our results showed an impact on the immediate work environment of the trainees. The findings emphasized the importance of multiple patterns of interaction within the organization and of identifying the contextual conditions that facilitate and impede the programs' impact.
|
|
Training Program Staff in the Use of Action Research: An Internal Evaluator-External Evaluator Collaborative Adventure
|
| Presenter(s):
|
| Georgia Kioukis, EducationWorks, gkioukis@educationworks.org
|
| Vonda Johnson, Paragon Applied Research and Evaluation, vjohnson@paragon-are.net
|
| Abstract:
EducationWorks, an education non-profit organization, runs three after-school programs located in Camden, New Jersey. When its funder mandated that an action research method be employed as part of the program evaluation, the internal evaluator and the external evaluator partnered to ensure that a participatory and collaborative action research framework was instituted. McNiff and Whitehead (2006) describe action research as 'a form of enquiry that enables practitioners everywhere to investigate and evaluate their own work.' Action research was embraced as a means to build evaluation capacity at these program sites. This paper will present details regarding the training, staff reactions to the training, staff progress in using the action research approach, and perspectives on an internal-external evaluator partnership.
McNiff, J., & Whitehead, J. (2006). All You Need to Know About Action Research. Thousand Oaks, CA: Sage Publications.
|