

Session Title: Slow Down, You Move Too Fast: Calibrating Evaluator Engagement to the Pace of Campaigners and Advocates when Developing Theories of Change
Panel Session 751 to be held in BOWIE C on Saturday, Nov 13, 10:00 AM to 10:45 AM
Sponsored by the Advocacy and Policy Change TIG
Chair(s):
Mary Sue Smiaroski, Oxfam International, marysue.smiaroski@oxfaminternational.org
Abstract: Many development practitioners, especially those engaged in campaigning and advocacy, are activists with little time for theoretical discussions or in-depth evaluative processes. Yet articulating a theory of change (TOC) with adequate specificity and committing to testing it can assist practitioners in multiple ways: by helping frame real-time strategic reflection in the face of complexity and uncertainty; by making cause-and-effect assumptions explicit so they can be tested, allowing for course correction and better allocation of resources; and by better meeting accountability obligations and garnering external support. Gabrielle Watson and Laura Roper will draw from their extensive experience working with advocates and campaigners at Oxfam and other organizations and share how they have approached the challenge of engaging high-octane activists to develop stronger evaluative practice. This will be a highly interactive session, and we look forward to learning from participants’ experiences and insights.
Strategic Intuition and Theory of Change: Connecting the Dots for Stakeholders
Laura Roper, Brandeis University, l.roper@rcn.com
It’s been the experience of this evaluator that advocates, even in highly strategic and effective campaigning efforts, are often flummoxed when asked to explain their theory of change and how they intend to test it. Because campaigns are labor intensive, costly, involve a great deal of “invisible work” (e.g., managing relationships, building consensus, cultivating media and allies), and often don’t result in clear policy victories, advocacy teams can find themselves in a vulnerable position in times of resource constraints and within institutions with competing priorities. Yet M&E is almost always a low priority, and even more so under pressure to deliver. Evaluators have to find light, efficient, accessible processes that help advocacy teams unpack and articulate their reality so that they are active participants in setting the terms on which their work is evaluated. This presentation draws from several experiences with Oxfam and other organizations on how this might be done.
Developing, Testing, and Building Data Systems Around Theory of Change: A Year in the Life of an Embedded Monitoring, Evaluation and Learning Staffer
Gabrielle Watson, Oxfam America, gwatson@oxfamamerica.org
Oxfam America has been working to develop a systematic approach to policy advocacy monitoring, evaluation, and learning (MEL) since 2007. Two campaigns were assessed through end-of-campaign staff debriefs and external summative reviews in 2007 and 2008. But Oxfam recognized that policy advocacy efforts are characterized by high degrees of unpredictability and complexity, that significant shifts in external context are the norm, and that advocates need constant feedback and up-to-the-minute intelligence about shifts in the policy-making environment. Further, managers and external stakeholders need to understand context and the complexity of policy change processes in order to appreciate the significance of intermediate results and claims of contribution or attribution. But how to do this? Recognizing that this is a classic adaptive challenge, Oxfam embedded a staff person inside its largest campaign, the climate change campaign, to develop MEL systems and tools “from the inside-out.” This presentation shares lessons from this experience.
