eStudy 098: Working with Assumptions to Unravel the Tangle of Complexity, Values, Responsiveness

Presenters:

Jonathan Morell, Ph.D., Principal, 4.669 Evaluation and Planning; Editor, Evaluation and Program Planning

Apollo M. Nkwake, Ph.D., CE, International Technical Advisor, Monitoring and Evaluation, Education Development Center

Katrina L. Bledsoe, Ph.D., Research Scientist, Education Development Center; Principal Consultant, Katrina Bledsoe Consulting

Description

“One program's overelaboration is another program's clarification.” – Carol H. Weiss

It is impossible for evaluators not to make assumptions that simplify the world in which programs, initiatives, and “wicked problems” exist. Simplification, including paring down to key values, is necessary because without it no evaluation can reveal relationships that matter. We always need a model that provides a simple and straightforward guide for constructing evaluation designs and interpreting data. The model may be formal or informal, elaborate or sparse, explicitly constructed or implicit. But there is always a model, and to be useful it must always provide a parsimonious explanation of the phenomena, and the world, at hand. Like evaluators, program designers must also grapple with simplification, and thus often develop sub-optimal single- or low-dimensional designs to address multi-dimensional problems.

Some choices about simplification are necessary and deliberate, but many more are unintentional. Sometimes models are justified by data, sometimes by expertise, and sometimes only by leaps of faith. The problem is not that evaluators and program designers omit what may be important; such omission is inevitable. The problem is the failure to appreciate what is being simplified. These simplifications fall into three categories: 1) the presence of behavior that characterizes complex systems, 2) cultural context, and 3) values. We do not question our choices, nor do we fully respect the limits of our understanding of those choices. We often do not understand how values and context, particularly cultural context, influence program strategies and outcomes. We do not recognize that programs may not operate per the dictates of our flawed common sense. We do not admit that it can be adaptive to ignore some assumptions, even though we recognize their importance.

The purpose of this workshop is to help evaluators recognize their assumptions and to make wise choices about which to apply, and when, during a program’s life cycle. The eStudy’s focus will be on three broad areas in which assumptions regularly affect evaluation: 1) complex behavior in program operations and outcomes, 2) values, and 3) cultural responsiveness.

 

Learning Outcomes

    • To use tools and techniques for identifying assumptions. Examples of these methods include “5-why” questioning combined with root cause diagramming, techniques to capture diverse perspectives, and alternate scenario construction.
    • To appreciate assumptions about the implications of complex behavior in program design and evaluation. Examples include the consequences of maximizing correlated outcomes, shapes of outcome distributions, contextual and cultural biases, and constraints on predictability of program behavior.
    • To recognize frequently made assumptions about program logic. Examples include failing to consider whether relationships in a program theory are “and” or “or” relationships, and acting as if all elements of a program theory are equally important.
    • To clarify assumptions regarding values, including: 1) the implications of epistemological choices, 2) the ethical implications of complex behavior, 3) the values and beliefs implicit in relationships with stakeholder groups, and 4) beliefs about the purposes of evaluation.
    • To be sensitive to the importance of cultural responsiveness with respect to group affiliation, belief systems, history, and power.
    • To work with theories of change in a manner that accounts for interactions among complex behavior, values, and cultural responsiveness.
    • To systematically integrate a focus on assumptions into program design, monitoring, and evaluation.

     

    Agenda

    This series has three sessions, with each session building on the one before. However, each session is designed to be free-standing, so attendees who attend only some of the sessions will still come away with actionable knowledge and information. To promote continuity, each session will begin with a quick overview of what went before.

    First Session: How can assumptions be recognized? January 14, 2019 (12:00-1:30 pm EST)

    Facilitators will walk through scenarios that illustrate applications and will use tools and processes designed to help reveal and monitor the use of assumptions. We will help participants learn to gauge the tools' and processes' appropriateness, their strengths and limitations, and when to use which tools.

    Second Session: The role of assumptions in the design and conduct of evaluation. January 28, 2019 (12:00-1:30 pm EST)

    This session will cover assumptions about how programs work and about relationships among outcomes; assumptions about program logic; assumptions about evaluation; and assumptions about program purpose.

    Third Session: Assumptions, Values, and Cultural Responsiveness. February 11, 2019 (12:00-1:30 pm EST)

    This session will discuss the intersection of cultural responsiveness and values assumptions in program design and evaluation, and will reflect on case studies and scenarios.

    Additional Considerations

    • Practice questions will be posed to workshop attendees for consideration.
    • A resource and/or reference list will be distributed for each webinar so that attendees can follow up on the content presented.
    • A Google Discussion Group will be set up for attendees who wish to chat and exchange ideas between sessions.

     

    Primary Audience 

    The eStudy targets program designers, program managers, and evaluators. Prior experience with logic models and theories of change is desirable.

     

    Dates

    January 14, 2019 (12:00-1:30 pm EST)
    January 28, 2019 (12:00-1:30 pm EST)
    February 11, 2019 (12:00-1:30 pm EST)

     

    Facilitators' Experience

    Jonathan A. Morell has designed and conducted workshops on logic models and complexity; implicit assumptions have been important components of both workshop topics. He has conducted workshops for the American Evaluation Association, the Canadian Evaluation Society, the European Evaluation Society, the Eastern Evaluation Research Society, and several smaller groups of evaluators in the U.S. and Europe. He has also lectured, consulted, and produced videos on these topics. He recently completed a major research project designed to help NGOs recognize their assumptions. His current research has three foci: 1) doing evaluation that recognizes the implications of complex behavior (as opposed to complex systems), 2) sensitizing stakeholders to the program implications of complexity, and 3) combining traditional evaluation and agent-based modeling. His recent hands-on evaluation has dealt with close call reporting systems in transportation programs, efforts to minimize electronic distraction in industrial settings, and the impact of R&D. Insight into his background and experience can be found at his website, his blog, and his YouTube channel (www.jamorell.com, https://evaluationuncertainty.com/, and https://www.youtube.com/channel/UCqRIJjhqmy3ngSB1AF9ZKLg).

    Apollo M. Nkwake works as International Technical Advisor on Monitoring and Evaluation (M&E) at Education Development Center. He previously served as associate research professor for M&E at The George Washington University and at Tulane University. He was M&E manager at AWARD and has worked for international agencies across Africa, Asia, and Latin America. He holds a Ph.D. from the University of Cape Town and is a designated Credentialed Evaluator. Dr. Nkwake is a recipient of the American Evaluation Association’s 2017 Marcia Guttentag Promising New Evaluator Award. He has authored three books and several journal papers and book chapters, and has guest-edited two special journal volumes. He is the author of Credibility, Validity, and Assumptions in Program Evaluation Methodology (Springer, 2015) and Working with Assumptions in International Development Program Evaluation (Springer, 2013).

    Katrina L. Bledsoe has taught numerous professional development workshops and classes in evaluation, research methods, and cultural responsiveness and equity for professionals and for graduate and undergraduate students. Katrina is a trained evaluator, mixed methodologist, and social psychologist with more than 20 years of experience in local, state, and federal government. She is an adjunct faculty member at Claremont Graduate University (CGU), non-university-affiliated research faculty at the Center for Culturally Responsive Evaluation and Assessment at the University of Illinois at Urbana-Champaign, and a consultant to federal agencies, schools, universities, and community-based organizations. Katrina has authored chapters, articles, and blogs on evaluation practice, mixed methodology, cultural responsiveness, social psychology, and other topics in journals such as the American Journal of Evaluation and New Directions for Evaluation, and she serves on the editorial board of New Directions for Evaluation. She is active in AEA, currently serving on the Evaluation Policy Task Force, and she has served on the Board of Directors and as a member of the task force that developed the Association’s Public Statement on Cultural Competence in Evaluation.