This page begins with links to the growing number of portal sites that bring a wider range of pm&e sources together. You are already at the Learning for Sustainability pm&e portal, and there are a number of others, particularly those focusing on m&e resources and practice. The second section provides annotated links to emerging approaches, tools & methodologies. These portal sites are all good places to start, especially for readers looking for generic support.
BetterEvaluation An international collaboration to improve evaluation practice and theory by sharing information about options (methods or tools) and approaches. The site aims to guide you through the rapidly expanding range of choices available when planning and designing evaluation activities. With BetterEvaluation you can discover options and useful resources, share your experiences and learn with peers.
Community sustainability engagement ‘Evaluation Toolbox’ This toolbox aims to provide a one-stop site for the evaluation of community sustainability engagement projects that aim to change behaviours. Through the toolbox you can learn how to conduct your own evaluation of a behaviour change project using the guides and templates provided. The toolbox is designed to be easy to use and transferable across a range of project types and scales. It also makes a point of referencing and linking to other sources of information.
SEA Change SEA Change is a virtual Community of Practice (CoP) focused on the monitoring and evaluation of climate change interventions in Asia, and beyond. While the community was initially focused on Southeast Asia – hence the ‘SEA’ in SEA Change – the focus now includes the entire Asian continent in its widest definition. Learning how to design, implement and scale up more effective interventions to address the effects of climate change requires advances in approaches to capturing learning, monitoring progress and evaluating achievements.
Emerging approaches, tools & methodologies
Aiming for Utility in ‘Systems-based Evaluation’: A Research-based Framework for Practitioners. This 2015 IDS paper by John Grove looks at the utility of system dynamics modelling (SDM) as a systems-based evaluation (SBE) approach. A system dynamics model was developed to evaluate the potential requirements of, and implications for, the health system of the ambitious antiretroviral therapy scale-up strategy in Lusaka, Zambia. Research on SDM for strategic evaluation provided insights and principles for future application of SBE.
Guide to Evaluating Collective Impact. Leaders of collective impact initiatives need an approach to performance measurement and evaluation that is as multi-faceted, responsive, and flexible as the initiatives themselves. This three-part guide by Hallie Preskill, Marcie Parkhurst, and Jennifer Splansky Juster offers detailed advice on how to plan for and implement effective performance measurement and evaluation activities in the context of collective impact.
Building a strategic learning and evaluation system for your organization. This 2013 report by Hallie Preskill and Katelyn Mack responds to the need for a more strategic approach to evaluation. In this guide, they provide a framework and set of practices that can help organizations be more systematic, coordinated, and intentional about what to evaluate, when, why, with whom, and with what resources. When fully implemented, these elements work together to ensure that learning and evaluation activities reflect and feed into an organization’s latest thinking.
Strategic learning in practice: Tools to create the space & structure for learning. This 2012 guide by Jewla Lynn recognizes that organizations do not routinely learn unless they are purposeful about creating both the space and the structure for collective dialogue and exchange. Her brief goes on to explore two tools that organizations can use for this purpose: i) Theories of Change that create the structure for learning and function as living documents that are equally relevant to planning, implementation, and learning; and ii) strategic Learning Debriefs that create the space for learning through reflective practice designed to move from learning to action. See also the 2011 report, Evaluation to support strategic learning: principles and practices, by Julia Coffman and Tanya Beer.
Achieving “Collective Impact” with Results-Based Accountability. Results-Based Accountability (RBA) was developed by Mark Friedman, author of Trying Hard Is Not Good Enough. RBA starts with the ends – the results a community wants – and works backward to the means, asking of any programme: how much did we do, how well did we do it, and is anyone better off?
Investment Logic Mapping (ILM). ILM supports the development of the strongest case for an individual investment. It identifies the major problems that the investment will be required to address, the strategic interventions and solutions that will best respond to the problems identified, and the benefits that the investment will be required to deliver.
Evaluation for Strategic Learning: Assessing Readiness and Results. Organizations can apply evaluation for strategic learning at any level, from a single project to an entire organization or network of organizations. As this guide by Anna Williams highlights, the approach is particularly well suited to complex contexts where rapid adaptation and innovation are essential to success. Evaluation for strategic learning attempts to bridge the gap between evaluation and strategy. This approach to evaluation has a specific objective, improving strategy, in the same way that some evaluations aim to demonstrate impact. Different evaluation approaches, including developmental, formative, and summative evaluations, can be used for strategic learning.
Monitoring and evaluating advocacy. Monitoring and evaluation can shape and transform an advocacy strategy and help ensure results have the maximum effect. This document outlines basic steps in planning monitoring and evaluation for advocacy. It introduces the Bellwether methodology for use in assessing advocacy and other such efforts. Another similar 2014 report from ODI looks at Monitoring and evaluation of policy influence and advocacy. It explores current trends in monitoring and evaluating policy influence and advocacy; discusses different theories of how policy influence happens; and presents a number of options to monitor and evaluate different aspects of advocacy interventions.
To rubrics or not to rubrics? An experience using rubrics for monitoring, evaluating and learning in a complex project. In this Practice Note Samantha Stone-Jovicich shares her experience using an evaluation and monitoring approach called ‘rubrics’ to assess a complex and dynamic project’s progress towards achieving its objectives. Rubrics are a method for aggregating qualitative performance data for reporting and learning purposes.
Evaluative rubrics: a method for surfacing values and improving the credibility of evaluation. This 2013 paper by Julian King, Kate McKegg, Judy Oakden and Nan Wehipeihana aims to share practical evaluation lessons learned from working with rubrics. They highlight how their use helps to provide transparency about the basis on which evaluative judgments have been made, and provides a focus around which stakeholders and evaluators can have robust conversations about values. Their paper describes how to develop rubrics, and outlines where their use has been found to be most effective.
A guide to social return on investment (SROI) SROI is an approach to understanding and managing the value of the social, economic and environmental outcomes created by an activity or an organisation. As this guide from the SROI network and partners highlights, it is based on a set of outcomes-based principles that are applied within a collective framework. It builds on lessons learnt during a three-year programme on measuring social value funded in 2008 by the then ‘Office of the Third Sector’ based in the Cabinet Office of the UK Government. In its narrowest sense, SROI seeks to include the values of people who are often excluded from markets in the same terms as used in markets, that is money, in order to give people a voice in resource allocation decisions. In a wider sense, however, it is a framework to structure thinking and understanding, and to broaden our appreciation of a more diverse set of outcomes (economic, social and environmental) and a more diverse set of values that can be identified using mixed methods (e.g. see Sara Olsen’s The problem with SROI on the SKOLL World Forum). More information on the underlying outcomes modeling and theory of change can be found elsewhere on this site.
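At its simplest, an SROI ratio divides the present value of the monetised outcomes by the value of the inputs invested. The sketch below illustrates that arithmetic only; the figures and the 3.5% discount rate are invented for illustration, not drawn from the guide:

```python
def present_value(cashflows, rate):
    """Discount a series of annual outcome values back to today."""
    return sum(v / (1 + rate) ** (t + 1) for t, v in enumerate(cashflows))

def sroi_ratio(outcome_values, investment, discount_rate=0.035):
    """SROI ratio = present value of monetised outcomes / value of inputs."""
    return present_value(outcome_values, discount_rate) / investment

# Hypothetical programme: 50,000 invested, outcomes valued at
# 30,000 a year for three years, discounted at 3.5% a year.
ratio = sroi_ratio([30_000, 30_000, 30_000], 50_000)
print(f"SROI ratio: {ratio:.2f}")  # about 1.68, i.e. 1.68 of value per 1 invested
```

In practice the hard work lies in choosing defensible financial proxies for outcomes, not in this final division; the calculation simply makes the principle of monetised comparison explicit.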
Evaluation rubrics: how to ensure transparent and clear assessment that respects diverse lines of evidence. In this 2013 article Judy Oakden presents an education sector evaluation to illustrate the use of a logic model (to identify boundaries) and rubrics (to make evaluative judgements). The article shows how rubrics offer project stakeholders a process for making explicit the judgements in an evaluation, and are used to judge the quality, the value, or the importance of the service provided. Rubrics are made up of: i) evaluative criteria: the aspects of performance the evaluation focuses on; and ii) merit determination: the definitions of what performance looks like at each level.
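To make those two components concrete, here is a minimal sketch of a rubric in code. The criteria, merit levels and judgements are all hypothetical, invented purely to show how the pieces fit together:

```python
# Merit determination: an ordered scale of performance levels (hypothetical).
MERIT_LEVELS = ["poor", "adequate", "good", "excellent"]

# Evaluative criteria: the aspects of performance the evaluation focuses on
# (these three criteria are invented for illustration).
rubric = {
    "reach": "Proportion of the target community engaged",
    "quality": "Participants' rating of the service received",
    "outcomes": "Evidence of changed practice six months on",
}

# Judgements the evaluation team has made against each criterion.
judgements = {
    "reach": "good",
    "quality": "excellent",
    "outcomes": "adequate",
}

def overall_merit(judgements):
    """Aggregate per-criterion judgements into one overall level by
    averaging their positions on the merit scale."""
    scores = [MERIT_LEVELS.index(level) for level in judgements.values()]
    return MERIT_LEVELS[round(sum(scores) / len(scores))]

print(overall_merit(judgements))  # prints "good"
```

In real evaluations the overall judgement is usually reached through deliberation with stakeholders rather than by arithmetic; the averaging here only illustrates how explicit criteria and merit levels make that conversation transparent.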
Strengthening social change through organizational learning and evaluation This paper by Andrew Mott summarises the outcomes from the 2003 Gray Rocks conference on Strengthening Social Change Through Assessment and Learning. The gathering was sponsored by four organizations in the United States, Canada and the United Kingdom, and involved participants from Asia, Africa and Latin America as well as North America and Europe. It is well worth reading, and provides a good synthesis of the wealth of experience present. Another version of the report can be found at the Community Learning Partnership project.
Addressing the Question of Attribution in Evaluation. (Not recent – but still a useful intro) The purpose of this IDRC highlight is to provide suggestions for dealing with the challenge of ‘attribution’ within evaluation. It is designed to offer an overview of some of the key issues and challenges, as well as some suggestions for ‘ways ahead’. It is a synthesis of some of the ideas presented in Alex Iverson’s ‘Attribution and Aid Evaluation in International Development: Literature Review’ (2003) available at http://web.idrc.ca/ev_en.php?ID=32055_201&ID2=DO_TOPIC
The Most Significant Change (MSC) Technique: A Guide to Its Use, a PDF report produced by Rick Davies and Jess Dart in late 2004. This evaluation technique was originally developed by Rick Davies in 1993 as a means of participatory impact monitoring. The MSC approach involves the collection and “systematic participatory interpretation” of stories of change. It has been widely used in the monitoring of aid projects throughout the developing world, but its use is also expanding into government and corporate areas as the value of a dialogue-based technique becomes appreciated.
Clear Horizon publications page Jess Dart hosts a number of her publications covering Most Significant Change and other evaluation approaches on her Australian-based site.
Towards improving the role of evaluation within natural resource management R&D programmes: The case for learning by doing This paper by Will Allen discusses how the increasing use of participatory development approaches in recent years poses new challenges for decision-makers and evaluators. Because these programmes are designed to be responsive to changing community needs, one of the most pressing challenges is to develop participatory and systems-based evaluative processes that allow for ongoing learning, correction, and adjustment by all parties concerned. This paper outlines one such evaluation process, and uses a case study in New Zealand to illustrate its benefits in the light of current issues facing both evaluators and natural resource managers.
Policy and plan effectiveness monitoring Policy and plan effectiveness monitoring can signal the need for future action and provides information on possible improvements to policy and plan content and implementation. This page has been developed by Karen Bell and Leigh Robcke for the New Zealand-based Quality Planning website. It contains advice and links to a number of international resources that help with policy monitoring.
The Sourcebook for Evaluating Global and Regional Partnership Programs. Global and regional partnership programs represent collective action to achieve common development objectives that program partners can achieve more efficiently by working together. This report is a free-standing document which builds on principles and standards for evaluating these programs. It has been developed by the OECD/DAC Evaluation Network, the United Nations Evaluation Group, the Evaluation Cooperation Group of the Multilateral Development Banks, evaluation associations, and others.
M&E for learning and evaluation: The managing for impact approach There are increasing calls for new M&E approaches that encourage learning and participation. In this article Jim Woodhill and Mine Pabari explain how the managing for impact approach places M&E at the centre of learning and management processes.
Contemporary evaluation approaches routinely promote the involvement of a wide range of stakeholders, employing methods that allow a more equal opportunity for the expression of views and sharing of lessons. Social learning and empowerment reinforce each other. Empowerment is the process of enhancing the capacity of individuals or groups to make choices and to transform those choices into desired actions and outcomes. These approaches are not just used in community projects, but are now mainstream within organisations, institutions and other agencies.