Session Title: Yes, When Will We Ever Learn? How Evaluators Can Learn Better Ways to Understand Cause and Effect
Expert Lecture Session 569 to be held in International Ballroom C on Friday, November 9, 11:15 AM to 12:00 PM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
Presenter(s):
Patricia Rogers, Royal Melbourne Institute of Technology, patricia.rogers@rmit.edu.au
Discussant(s):
Bob Williams, Independent Consultant, bobwill@actrix.co.nz
Abstract: The substantial international efforts currently underway to improve the quality of evaluations, particularly in international development, have drawn attention to inadequacies in providing credible evidence of impact, most notably in the report "When Will We Ever Learn?". Remarkably, these efforts have focused almost exclusively on the use of randomized controlled trials, with little or no recognition of their limitations or of the development of alternatives better suited to the evaluation of complex interventions in open implementation environments. This session will turn the question of evaluation and learning onto the evaluation community itself and ask why the theory and practice of evaluation have been so slow to learn from current scientific thought, remaining largely bogged down in outdated approaches to causal attribution. Advocates of so-called scientific approaches to impact evaluation rely exclusively on the counterfactual argument for causal attribution: developing information about what would have happened in the absence of the intervention. This type of analysis fails to take into account more complex causal relationships, such as where an intervention is necessary but not sufficient (other contributing factors are needed for success), where it is sufficient but not necessary (alternative causal paths are available), or where the causal relationships involve interdependence rather than simple linear causality. This paper compares examples of the logic and methods of causal analysis in traditional 'scientific' evaluation with those that draw on complexity science. It discusses possible reasons why advocates of 'scientific' evaluation have failed to learn from current scientific thinking, and how such learning might occur.