|
Crossing Country and Cultural Boundaries for Work
|
| Presenter(s):
|
| Lennise Baptiste,
Kent State University,
lbaptist@kent.edu
|
| Abstract:
The development co-operation strategy adopted in 1996 by the Organization for Economic Cooperation and Development, an international organization focused on economic and social issues, shaped two partnerships formed between one European and two Caribbean non-governmental organizations. The lessons learned from the evaluation of these partnerships, which required a qualitative methodology, centered on cross-cultural relations, access, project plan logistics, and negotiation of the Terms of Reference. One key finding was that routine tasks can become monumental when evaluators work in unfamiliar terrain, even when language is not a barrier. This presentation can inform evaluators who undertake projects spanning cultural and country boundaries.
|
|
Exploring Connections Between Programs, Their Evaluation and Those Involved: An Example of the Implementation of a Community-based Pilot Program for Women Living with HIV
|
| Presenter(s):
|
| S Lory Hovsepian,
University of Montreal,
sarine.lory.hovsepian@umontreal.ca
|
| Astrid Brousselle,
University of Montreal,
astrid.brousselle@umontreal.ca
|
| Joanne Otis,
University of Quebec Montreal,
otis.joanne@uqam.ca
|
| Abstract:
Background: Community-university collaborators developed and implemented “Pouvoir Partager/Pouvoirs Partagés”, a community-based program (CBP) helping women living with HIV in Montreal, Canada, to manage the disclosure of their HIV status. Process and outcome evaluations were undertaken in the context of a formative evaluation. Objective: We explore the interconnections between the CBP, its evaluation, and participatory processes. Methods: Interviews with stakeholders, focus groups with program participants, and observations of committee meetings were undertaken. Results: The extent of participatory evaluation varied according to stakeholder expertise and interests, evaluation type, and other practical considerations. The evaluation process seemed to facilitate participation in program development and in the program itself, and to aid the program's continuation. Although the evaluation contributed to improving the program and its implementation, some perceived it as a strain that restricted implementation flexibility and limited program impact. Conclusion: Planning, implementing, and evaluating CBPs require that stakeholders address issues emerging from the interrelations between the program, its evaluation, and participatory processes.
|
|
Evaluation Decision Making as Authoritative Allocation of Value
|
| Presenter(s):
|
| A Rae Clementz,
University of Illinois,
clementz@uiuc.edu
|
| Jeremiah Johnson,
University of Illinois Urbana-Champaign,
jeremiahmatthewjohnson@yahoo.com
|
| Abstract:
Evaluation resources are limited. There is rarely enough time, funding, or people to accomplish everything we might need or want to do. David Easton (1954) defined policy as "an authoritative allocation of value." In the same way, the decisions made during the course of an evaluation regarding its design and implementation reflect value stances and are actualized through the allocation of the limited evaluation resources at our disposal. Thus, our practical and functional decisions in evaluation practice carry value implications. Evaluation decisions are never made solely by the evaluator; they are made in context and reciprocally with the group of individuals involved in conducting the evaluation. This session discusses these value decisions and their role in evaluation practice. Separate from any particular method or approach, this framework is intended to engage the practical implications of holding various value stances in the conduct of an evaluation.
|
|
Evaluability Assessment: What Makes a Program Ready for Evaluation?
|
| Presenter(s):
|
| Gargi Bhattacharya,
Southern Illinois University at Carbondale,
gargi@siu.edu
|
| Meghan Lowery,
Southern Illinois University at Carbondale,
meghanlowery@gmail.com
|
| Alen Avdic,
Southern Illinois University at Carbondale,
alen@siu.edu
|
| Abstract:
Evaluation of program impact should guide the development of programs; however, in real-life organizational situations, clients are often unaware of this essential program development process. Professional evaluators often encounter clients who are unsure of how they want to evaluate a program or how they want to measure its impact. Applied Research Consultants (ARC), a vertical practicum built within the doctoral program in Applied Psychology at Southern Illinois University at Carbondale, is a graduate student-run consulting firm. ARC conducts professional evaluations and provides other services (e.g., job analysis, survey research, and data analysis) to clients within and outside the University community. Programs evaluated by ARC were analyzed, and evaluators were interviewed, to identify the components of programs that made them easier to evaluate or more evaluation-ready.
|