Development Evaluation in Context: Tensions, Opportunities, and the Need for a More Integrated Approach

Presenter(s):

Lesli Hoey, Cornell University, lmh46@cornell.edu

Mark Constas, Cornell University, mac223@cornell.edu

Abstract:
The notion of development represents varied and often conflicting interests. Intentionally or unintentionally, interventions express a particular set of development priorities. While these priorities may be internally consistent within a specific program, they may not reflect the larger mandate of the implementing agency itself, the goals of parallel development initiatives, the long-term needs of local actors, or concerns that become apparent at larger geographical scales. The present paper explores the factors that determine the extent to which a given evaluation will support or undermine broader development priorities. We first provide a framework for investigating the context of development priorities within which a given program is embedded and argue that such investigations are an essential first phase of work for evaluations in developing contexts. The second part of the paper examines the consequences that such investigations have for the design, implementation, and effects of development evaluation.


The Politics and Consequences of Participation in International Development Evaluation

Presenter(s):

Anne Cullen, Western Michigan University, anne.cullen@wmich.edu

Abstract:
Despite their widespread use, there is a dearth of research on the impact of participatory approaches in international development evaluations. Although proponents of participatory approaches to international development evaluation assert many advantages of their use, the evidence to support these claims is largely anecdotal. Similarly, critics of participatory approaches lack empirical data on which to base their assertions. Without systematic scientific study, it is difficult to substantiate or refute any of these claims. This session presents the findings of an empirical study on participatory approaches to international development evaluation undertaken to (i) better understand current trends and practice; (ii) describe the perceived impacts of participatory evaluation; and (iii) help improve future evaluation practice. Ultimately, this presentation will contribute to empirical knowledge on evaluation practice, particularly as it relates to stakeholder participation in international development evaluation.


Organization Paradigms and Evaluation

Presenter(s):

Alexey Kuzmin, Process Consulting Company, alexey@processconsulting.ru

Abstract:
Organizations are different: they have different 'personalities', or organizational cultures, and they are built and operate in different ways. According to Larry Constantine (1993), an organizational paradigm is 'both a standard or model for an organization and a world view, a way to make sense of organizational reality'. Since evaluation should be built into organizational reality, it is important to identify and consider the organizational paradigm when choosing the most relevant evaluation approach for a particular program or organization. In this presentation we explore the four organizational paradigms described by Constantine (closed, synchronous, random, open) from an evaluator's point of view and suggest how to design useful evaluations that take the organizational context into account.


If, Then, and So What: Theory and the Primacy of Field Experience for International Evaluation

Presenter(s):

Catherine Elkins, RTI International, celkins@rti.org

Abstract:
Too often, evaluations are driven by external stakeholders who rarely understand the field context well enough to articulate meaningful queries, or the intervention theory well enough to test its embedded hypotheses. In international evaluation, the full complexity of the field context must inform every aspect of the evaluation: which concepts are central, what can be measured, how it can be measured, who can contribute in which ways, and the extent to which, and direction in which, generalizability can be attempted. Without clearly articulated theory and distinguishable, testable hypotheses, a cross-cultural evaluation study risks producing findings that are little more than anecdotes, regardless of its empirical evidence. This paper presents monitoring and evaluation (M&E) and evaluation case studies that integrate theoretical rigor with appreciation of key elements of the local operational context, in order to produce study results that are locally and generally useful and usable for strengthening development theory.