|
Selecting Development Interventions for Rigorous Impact Evaluations: What Matters?
|
| Presenter(s):
|
| Debazou Y Yantio,
Cameroon Ministry of Agriculture and Rural Development,
yantio@hotmail.com
|
| Abstract:
Rigorous impact evaluations are costly and, of course, divert a substantial amount of resources from the delivery of direct services to the program target population. The dilemma is: "allocate nearly one million US dollars to a rigorous impact evaluation, or finance more schools or water pipes in remote rural areas?" Based on empirical data, the paper identifies some of the characteristics of development interventions (period, willingness of recipient country stakeholders, etc.) in the rural development sector that have undergone impact evaluations in the recent past. Probit and discriminant analyses are then applied to find out which characteristics significantly determine the likelihood of sampled interventions being subjected to impact evaluation. In order to achieve optimal use of available resources, the paper suggests some desirable attributes of programs to be earmarked for rigorous impact evaluation, opening up opportunities for improved program design and development policy making.
|
|
Culture, Science, and Data Integrity: Assessing Claims of Falsification in the Field
|
| Presenter(s):
|
| Ann Dozier,
University of Rochester,
ann_dozier@urmc.rochester.edu
|
| Arlene Saman,
One HEART Tibet,
lhamo47@hotmail.com
|
| Addie Koster,
One HEART Tibet,
adriana_aletta@hotmail.com
|
| Pasang Tsering,
One HEART Tibet,
onehearttibet@yahoo.com
|
| Timothy Dye,
Axios International,
tim.dye@axiosint.com
|
| Abstract:
Data integrity is a core concern for researchers, program directors, and evaluators. Despite the fundamental requirement for data integrity, descriptions of how claims of falsification are handled are virtually absent from the published literature. We describe how a research project in rural Tibet responded to accusations that women enrolled in the project did not meet eligibility criteria or were fabricated by enrollment staff. These claims were addressed thoroughly and promptly through a process open and transparent to all staff that provided for identification of all concerns about falsification. Follow-up was then conducted for each enrolled woman in question (n=55) as well as for a set of control women (n=16) for whom data had been previously obtained (to assess process reliability). Claims of falsification may well represent a clash between the scientific culture of precision and a local culture that values error-free work and meeting workplace expectations.
|
|
Conducting an Online Follow-Up Survey in the Changing Political Context of Kosovo: Challenges and Findings
|
| Presenter(s):
|
| Patty Hill,
EnCompass LLC,
phill@encompassworld.com
|
| Mary Gutmann,
EnCompass LLC,
mgutmann@encompassworld.com
|
| Abstract:
As part of the Kosovo Media Assistance Program, EnCompass conducted an online follow-up survey on gender and ethnicity in the media at a historic time in Kosovo. The paper will first explore how we addressed the practical challenges of conducting the online survey among journalists actively involved in the media as Kosovo travels the path to independence; these challenges included reaching the respondents, language issues, working from another country, survey fatigue and respondents' opportunity to 'opt out', and timing. The paper will also discuss a systems perspective on adapting the follow-up survey, including how decisions were made about which questions to keep and which to change, drop, or add in order to provide more useful information based on previous findings. Finally, the use of the follow-up survey's findings will be discussed in the context of Kosovo's changing political situation.
|
|
A Training Program Evaluation of an Investment Company: Taiwan’s Case
|
| Presenter(s):
|
| Chien Yu,
National Taiwan Normal University,
yuchienping@yahoo.com.tw
|
| Chin-Cheh Yi,
National Taiwan Normal University,
jack541011@gmail.com
|
| Abstract:
Globalization is a prevalent phenomenon across culture, business, and politics. In business, for example, companies have grown many times larger than in the past. Large Taiwanese banking, insurance, and investment companies have established branches in Hong Kong, mainland China, and other areas around the world. The dispersed locations of these branches have forced the companies to adopt e-learning as a common training tool. Business leaders always want to know the return on their investment, so evaluating training effectiveness becomes a necessity. The main purpose of this case study is to explore the successful practices of a training program evaluation in a prestigious Taiwanese investment company. Several research questions are addressed: (1) What model is used for evaluating training effectiveness? (2) Why was that evaluation model selected? (3) What is the process of the evaluation? (4) What instrument(s) are used in the evaluation process? (5) What results were found in the evaluation? And (6) What problems arose in the evaluation and how were they solved? Suggestions based on the empirical data are provided for the case company.
|
|
Designing Evaluations and Surveys in International Contexts
|
| Presenter(s):
|
| Marc Shapiro,
MDS Associates,
shapiro@urgrad.rochester.edu
|
| Abstract:
Since the Paris Declaration, many donors have pledged to improve the use of monitoring and evaluation of their programs. The contexts for these evaluations, and for the use of surveys in international projects, differ in several respects from those in most developed contexts and often lag in quality, but the underlying rationales for good evaluations remain the same. These differences affect how evaluations should be designed to measure the outcomes and impacts of these types of interventions, as well as the procedures used to conduct them. This presentation is intended to spark discussion of lessons learned across countries, sectors, and donors. It uses the sectors of education, knowledge management, and information technology for development as examples, but more broadly focuses on key differences that affect the design and implementation of surveys specifically and evaluations more generally.
|