Evaluation 2011



Session Title: Systems Thinking Evaluation Tools and Approaches for Measuring System Change
Multipaper Session 910 to be held in Avila B on Saturday, Nov 5, 12:35 PM to 2:05 PM
Sponsored by the Systems in Evaluation TIG
Chair(s):
Mary McEathron,  University of Minnesota, mceat001@umn.edu
Effective Incorporation of Systems Thinking in Evaluation Practice: An Integrative Framework
Presenter(s):
Kanika Arora, Syracuse University, arora.kanika@gmail.com
William Trochim, Cornell University, wmt1@cornell.edu
Abstract: Support for systems thinking in the evaluation of complex social programs has increased markedly in recent years. However, despite general agreement on the holistic advantage offered by the systems approach, the practical implementation of systems thinking remains challenging. For many practitioners, the most effective way to integrate systemic thinking into conventional evaluation activities is still unclear. In response, this paper develops an integrative theoretical framework that connects key elements of the systems paradigm to foundational concepts in the evaluation literature. Specifically, it links the systemic ideas of inter-relationships, perspectives, and boundaries to Scriven's classification of evaluation types and to Campbell's validity categories. With this framework, we can begin to explore strategies for the enhanced implementation of systems thinking in mainstream evaluation practice.
Keeping Track in Complicated or Complex Situations: The Process Monitoring of Impacts Approach
Presenter(s):
Richard Hummelbrunner, OEAR Regionalberatung, hummelbrunner@oear.at
Abstract: This monitoring approach systematically observes the processes that are expected to lead to the results or impacts of an intervention. It builds on the assumption that inputs (as well as outputs) have to be used by someone to produce the desired effects. A set of hypotheses is identified about the desired use of inputs or outputs by various actors (e.g., partners, project owners, target groups), which is considered decisive for the achievement of effects. These hypotheses are incorporated into logic models as statements of 'intended use', and the assumptions are monitored during implementation to determine whether they remain valid and actually take place, or whether they should be amended (e.g., to capture new developments or unintended effects). The paper describes the approach as well as the experience gained in Austria and beyond, in particular its application to monitoring programmes, in order to provide an adequate understanding of their performance under more complex and dynamic implementation conditions.
The Development and Validation of Rubrics for Measuring Evaluation Plan, Logic Model, and Pathway Model Quality
Presenter(s):
Jennifer Urban, Montclair State University, urbanj@mail.montclair.edu
Marissa Burgermaster, Montclair State University, burgermastm1@mail.montclair.edu
Thomas Archibald, Cornell University, tga4@cornell.edu
Monica Hargraves, Cornell University, mjh51@cornell.edu
Jane Buckley, Cornell University, janecameronbuckley@gmail.com
Claire Hebbard, Cornell University, cer17@cornell.edu
William Trochim, Cornell University, wmt1@cornell.edu
Abstract: A notable challenge in evaluation, particularly systems evaluation, is finding concrete ways to capture and assess the quality of program logic models and evaluation plans. This paper describes how evaluation quality is measured quantitatively using logic model and evaluation plan rubrics. Both rubrics are paper-and-pencil instruments that assess multiple dimensions of logic models (35 items) and evaluation plans (73 items) on a five-point scale. Although the rubrics were designed specifically for use with a systems perspective on evaluation plan quality, they can potentially be used to assess the quality of any logic model and evaluation plan. This paper focuses on the development and validation of the rubrics and includes a discussion of inter-rater reliability, the factor-analytic structure of the rubrics, and scoring procedures. The potential use of these rubrics to assess quality in the context of systems evaluation approaches will also be discussed.
Stories and Statistics with SenseMaker®: New Kid on the Evaluative Block
Presenter(s):
Irene Guijt, Learning by Design, iguijt@learningbydesign.org
Dave Snowden, Cognitive Edge, dave.snowden@cognitive-edge.com
Abstract: SenseMaker®, developed by Dave Snowden, is an innovative newcomer to evaluative practice, with experiments in 2010 and 2011 pioneering its application in international development. This paper draws on examples from Kenya (community development) and Ghana/Uganda/global policy (water services) to illustrate how several persistent dilemmas in the evaluation profession can be overcome. SenseMaker® helps organizations: focus on shifting impact patterns as perceived from different perspectives; generate databases ('people's life libraries') that, if facilitated and linked to decision makers, support evidence-based policy; generate rolling baselines that continually update the evidence base; enable cross-silo (including cross-organisational) thinking and overcome narrow understandings of the attribution of efforts; seek surprise explicitly rather than viewing people's lives through our own concepts; provide more grounded and diverse feedback to donors, and thus more local autonomy; and generate actionable insights, based on very concrete needs, via peer-to-peer knowledge management.

