Session Title: Performance Measurement for Policy Advocacy: Insights From Embedded MEL Practitioners

Panel Session 891, to be held in San Clemente on Saturday, Nov 5, 9:50 AM to 11:20 AM

Sponsored by the Advocacy and Policy Change TIG

Chair(s):
Gabrielle Watson, Oxfam America, gwatson@oxfamamerica.org

Abstract:
Policy advocacy has gained increasing recognition from boards and philanthropic supporters of the non-profit world as an effective way to achieve systemic change benefiting large numbers of people. With this growing support comes a growing demand to demonstrate results, which has spawned significant innovation within the field of policy advocacy evaluation and a specific interest in tools drawn from the private sector, such as performance measurement. This panel presents the experiences of two organizations that have been applying performance measurement to their policy advocacy work. The presenters, each an internal MEL staffer, discuss issues such as the tension between strategic learning and accountability and the search for meaningful and manageable indicators. Finally, they explore how performance measurement systems can prompt teams to engage in meaningful conversations that lead to collective learning and corrective action.

Performance Measures for Advocacy Campaigns: The Search for Usefulness and Practicality

Lisa Hilt, Oxfam America, lhilt@oxfamamerica.org

In 2011, Oxfam America embarked on an ambitious effort to create annual policy advocacy campaign plans and institute a quarterly performance measurement system. The system seeks to identify a short set of meaningful and measurable indicators that are relevant across five issue areas and that can drive strategic conversations and behavior. Lisa Hilt, Campaigns Coordinator at Oxfam America, describes the process of identifying and constructing indicators, the design of the quarterly review process, and the lessons about indicators and process gained through this experience. She explores the critical challenge of developing measures that are feasible in a fast-paced, tightly resourced campaign environment, describing the practical implications of various indicator choices.

Delivering on Demands for Learning and Accountability

Kimberly Bowman, Oxfam GB, kbowman@oxfam.org.uk

Robust monitoring and evaluation systems are expected to deliver many things for many different stakeholders. Managers, individuals, or project teams might be looking for information to inform or assess their day-to-day work, to provide evidence of progress, or to generate the next big strategic insight or program innovation. Meanwhile, donors and marketers have their own information needs. How do we balance the sometimes competing demands for learning and accountability? How do we decide which stakeholders' needs take priority over others'? Kimberly Bowman has worked with development organizations in Canada and England and serves as a Learning & Accountability Advisor to Oxfam GB's UK Campaigns team. In this presentation, she discusses some of the challenges she has faced in attempting to deliver on these at-times competing demands and expectations for learning and accountability.

From Theory of Change to Performance Measures: Two Paths, One Taken

Gabrielle Watson, Oxfam America, gwatson@oxfamamerica.org

In 2011, Oxfam America embarked on an ambitious effort to create annual policy advocacy campaign plans and institute a quarterly performance measurement system. With three campaigns and five 'issue areas', a diversity of issues, policy environments, targets, approaches, and, indeed, theories of change underpins the work. The aim of the performance measurement system is to identify a short set of meaningful and measurable indicators that are relevant across the five areas and that will drive strategic conversations and strategic behavior. Gabrielle Watson, Senior Campaign Evaluation Advisor at Oxfam America, describes how one team navigated the tension between demands for a common set of indicators and the desire for nuanced qualitative analysis of political context and emergent evidence of political access, policy relevance, and influence.