Session Title: Evaluation From a Self-organizing Versus Predictive Systems Perspective: Examples From the Field
Panel Session 374 to be held in International Ballroom C on Thursday, November 8, 1:55 PM to 3:25 PM
Sponsored by the Systems in Evaluation TIG
|
Chair(s): Beverly Parsons, InSites, bevandpar@aol.com
Abstract:
Using three cluster evaluations, the presenters illustrate the importance of investigating the unplanned and unpredictable dimensions, as well as the planned and predictive dimensions, of complex human systems. They explore distinctions between organized and self-organizing system dynamics and how evaluation designs need to consider which aspect of the system is being addressed. They emphasize self-organizing systems, which persist in a state of disequilibrium and contain a densely intertwined web of interacting agents (e.g., subgroups, individuals), and show how evaluations can be designed to address them. Each agent responds to other agents and to the environment as a whole, continually adapting. The complexity of such systems prevents prediction using models based on a few variables, as can be done for the organized dimensions of a system. The session is designed to include small group interactions around each case and a concluding discussion to integrate ideas across the three cases.
Contrasting Evaluation Designs for Predictive and Self-organizing Dimensions of Complex Human Systems
Beverly Parsons, InSites, bevandpar@aol.com
This presentation provides a framework for understanding how differences in dynamics within a given system can be viewed through different evaluation approaches. Many traditional evaluation methods are grounded in the assumption that evaluators are evaluating predictable, controllable systems where cause-and-effect and rational principles can be tested, applied, repeated, and validated with comfortable regularity. In reality, most of the systems that evaluators are looking at today are complex adaptive systems. Complex adaptive systems include a self-organizing dimension that is emergent rather than hierarchical and whose future is often unpredictable. This dimension can well be a place of high creativity, innovation, and new modes of operation, but such features may be missed if the evaluation is approached only through the traditional lens. The presenter will illustrate the use of both predictive and self-organizing evaluation designs using a cluster evaluation example involving eight complex partnerships in a Canadian provincial health care initiative.
Dynamic Evaluation
|
| Glenda Eoyang,
Human Systems Dynamics Institute,
geoyang@hsdinstitute.org
|
|
A provincial Canadian health service conducted a research project to explore the influence of lateral mentorship and communities of practice on the development of interprofessional practice among allied health professionals. Suspecting that the project would set conditions for self-organizing dynamics, the team chose to implement an evaluation to uncover and articulate patterns that self-organized over time, look for unexpected consequences, investigate relationships among individuals and sites, and trace the system-wide influence of the project over time.
This presentation describes the process of collaborative design, data collection, and analysis used to track and analyze shifts that occurred over the course of the project in multiple sites, levels, and constituencies.
Co-evolving Evaluation
Patricia Jessup, InSites, pat@pjessup.com
Over a period of years, InSites has conducted evaluations of foundation-funded programs related to incorporating Asia into the curriculum of K-12 schools in the U.S. In the cluster evaluation of organizations providing study tour programs to Asia for K-12 educators, predictive methods were initially used to examine program outcomes for the educators. The focus of the evaluation then shifted to self-organizing dynamics related to sustaining attention to Asia in schools. Evaluation findings were reported to program leaders through means that engaged the leaders in dialogue and bridged previously uncrossed boundaries. Program leaders self-adjusted, as did the evaluation team.
This presentation illustrates how reflective questions were used to engage in dialogue about evaluation findings, how the users of the evaluation developed their own understanding of how those findings could shape their work, and how this led to a shift in the evaluation.