PM&E – a range of approaches

This page provides annotated links to some emerging approaches, tools & methodologies.


3 Best Methods to Evaluate Training Effectiveness
This 2019 post by Jonathan Deller briefly introduces three of the most widely used methods of evaluating training effectiveness and how to apply them: the Kirkpatrick Taxonomy, the Phillips ROI Model, and the CIPP Evaluation Model.


Understanding and misunderstanding randomized controlled trials
This 2018 paper by Angus Deaton and Nancy Cartwright argues that the lay public, and sometimes researchers, put too much trust in RCTs over other methods of investigation. Demanding ‘external validity’ is unhelpful because it expects too much of an RCT while undervaluing its potential contribution. RCTs do indeed require minimal assumptions and can operate with little prior knowledge. This is an advantage when persuading distrustful audiences, but it is a disadvantage for cumulative scientific progress, where prior knowledge should be built upon, not discarded. RCTs can play a role in building scientific knowledge and useful predictions, but they can only do so as part of a cumulative program, in combination with other methods, including conceptual and theoretical development, to discover not ‘what works’, but ‘why things work’.

All That Glitters Is Not Gold: The Political Economy of Randomised Evaluations in Development
This 2019 paper by Florent Bédécarrats and colleagues argues that randomised controlled trials (RCTs) have a narrow scope, restricted to basic intervention schemes, yet are still advertised as the gold standard for evaluating development policies. The paper takes a political economy angle to explore this paradox, arguing that the success of RCTs is driven mainly by a new scientific business model based on a mix of simplicity and mathematical rigour, media and donor appeal, and academic and financial returns.

What role should randomized control trials play in providing the evidence base for conservation?
This 2019 paper by Edwin Pynegar and colleagues acknowledges the need for improved evaluation of conservation interventions. Because many conservation interventions depend on changing people’s behaviour, the authors argue that conservation impact evaluation can learn a great deal from fields such as development economics, where RCTs have become widely used but remain controversial. Building on relevant literature from other fields, they discuss how RCTs, despite their potential, are just one of a number of ways to evaluate impact and are not feasible in all circumstances, and how factors such as spillover between units and behavioural effects must be considered in their design. They offer guidance and a set of criteria for deciding when RCTs may be an appropriate approach for evaluating conservation interventions.

See also RCTs Are Not (Always) the Answer and Why the ‘gold standard’ of medical research is no longer enough.


Aiming for Utility in ‘Systems-based Evaluation’: A Research-based Framework for Practitioners
This 2015 IDS paper by John Grove looks at the utility of system dynamics modelling (SDM) as a systems-based evaluation (SBE) approach. A system dynamics (SD) model was developed to evaluate the potential requirements of, and implications for, the health system of the ambitious antiretroviral therapy scale-up strategy in Lusaka, Zambia. The research on SDM for strategic evaluation provided insights and principles for the future application of SBE.
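
To give a flavour of what a system dynamics model involves, the sketch below simulates two stocks (patients awaiting ART and patients on ART) connected by flows, with treatment initiations constrained by staffing capacity. It is a minimal illustrative sketch only: all parameter names and values are hypothetical and are not drawn from the Lusaka study or the Grove paper.

```python
# Minimal stock-and-flow system dynamics sketch (illustrative only).
# All parameter values are hypothetical and NOT taken from the Lusaka study;
# they exist purely to show how an SD model links stocks, flows and capacity.

MONTHS = 60            # simulation horizon in months
DT = 1.0               # time step (months)

waiting = 5_000.0      # stock: diagnosed patients awaiting ART (hypothetical)
on_art = 1_000.0       # stock: patients currently on ART (hypothetical)

new_diagnoses = 400.0        # inflow: new patients per month (hypothetical)
starts_per_clinician = 10.0  # monthly initiations per clinician (hypothetical)
clinicians = 30.0            # staff available (hypothetical)
attrition_rate = 0.01        # monthly loss to follow-up (hypothetical)

for _ in range(int(MONTHS / DT)):
    # Flow: treatment initiations are limited by staff capacity, so the
    # waiting-list stock accumulates whenever demand outpaces capacity.
    initiations = min(waiting, starts_per_clinician * clinicians)
    attrition = on_art * attrition_rate

    # Euler integration of the two stocks over one time step
    waiting += (new_diagnoses - initiations) * DT
    on_art += (initiations - attrition) * DT

print(f"After {MONTHS} months: {waiting:,.0f} waiting, {on_art:,.0f} on ART")
```

The point of such a model is the feedback structure rather than the particular numbers: because initiation capacity (300 per month here) is below demand (400 per month), the waiting-list stock grows steadily, which is exactly the kind of system-level implication of a scale-up strategy that an SBE approach is designed to surface.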


Guide to Evaluating Collective Impact
Leaders of collective impact initiatives need an approach to performance measurement and evaluation that is as multi-faceted, responsive, and flexible as the initiatives themselves. This three-part guide by Hallie Preskill, Marcie Parkhurst, and Jennifer Splansky Juster offers detailed advice on how to plan for and implement effective performance measurement and evaluation activities in the context of collective impact.


Building a strategic learning and evaluation system for your organization
This 2013 report by Hallie Preskill and Katelyn Mack acknowledges the need for a more strategic approach to evaluation. In this guide, the authors provide a framework and set of practices that can help organizations be more systematic, coordinated, and intentional about what to evaluate, when, why, with whom, and with what resources. When fully implemented, these elements work together to ensure that learning and evaluation activities reflect and feed into an organization’s latest thinking.


Strategic learning in practice: Tools to create the space & structure for learning
This 2012 guide by Jewla Lynn recognizes that organizations do not routinely learn unless they are purposeful about creating both the space and the structure for collective dialogue and exchange. Her brief goes on to explore two tools that organizations can use for this purpose: i) Theories of Change, which create the structure for learning and function as living documents that are equally relevant to planning, implementation, and learning; and ii) Strategic Learning Debriefs, which create the space for learning through reflective practice designed to move from learning to action. See also the 2011 report, Evaluation to support strategic learning: principles and practices, by Julia Coffman and Tanya Beer.


Achieving “Collective Impact” with Results-Based Accountability
Results-Based Accountability was developed by Mark Friedman, author of Trying Hard Is Not Good Enough.


Investment Logic Mapping (ILM)
ILM supports the development of the strongest case for an individual investment. It identifies the major problems that the investment will be required to address, the strategic interventions and solutions that will best respond to the problems identified, and the benefits that the investment will be required to deliver.


Evaluation for Strategic Learning: Assessing Readiness and Results
Organizations can apply evaluation for strategic learning at any level, from a single project to an entire organization or network of organizations. As this guide by Anna Williams highlights, the approach is particularly well suited to complex contexts where rapid adaptation and innovation are essential to success. Evaluation for strategic learning attempts to bridge the gap between evaluation and strategy: its specific objective is improving strategy, in the same way that some evaluations aim to demonstrate impact. Different evaluation approaches, including developmental, formative, and summative evaluations, can be used for strategic learning.


Monitoring and evaluating advocacy
Monitoring and evaluation can shape and transform an advocacy strategy and help ensure results have the maximum effect. This document outlines the basic steps in planning monitoring and evaluation for advocacy, and introduces the Bellwether methodology for assessing advocacy and similar efforts. A related 2014 report from ODI, Monitoring and evaluation of policy influence and advocacy, explores current trends in monitoring and evaluating policy influence and advocacy; discusses different theories of how policy influence happens; and presents a number of options for monitoring and evaluating different aspects of advocacy interventions.


Strengthening social change through organizational learning and evaluation
This paper by Andrew Mott summarises the outcomes of the 2003 Gray Rocks conference on Strengthening Social Change Through Assessment and Learning. The gathering was sponsored by four organizations in the United States, Canada and the United Kingdom, and involved participants from Asia, Africa and Latin America as well as North America and Europe. The paper is well worth reading and provides a good synthesis of the wealth of experience present.


Towards improving the role of evaluation within natural resource management R&D programmes: The case for learning by doing
This paper by Will Allen discusses how the increasing use of participatory development approaches in recent years poses new challenges for decision-makers and evaluators. Because these programmes are designed to be responsive to changing community needs, one of the most pressing challenges is to develop participatory and systems-based evaluative processes that allow for ongoing learning, correction, and adjustment by all parties concerned. The paper outlines one such evaluation process and uses a case study from New Zealand to illustrate its benefits in the light of current issues facing both evaluators and natural resource managers.


The Sourcebook for Evaluating Global and Regional Partnership Programs
Global and regional partnership programs represent collective action to achieve common development objectives that program partners can achieve more efficiently by working together. This report is a free-standing document which builds on principles and standards for evaluating these programs. It has been developed by the OECD/DAC Evaluation Network, the United Nations Evaluation Group, the Evaluation Cooperation Group of the Multilateral Development Banks, evaluation associations, and others.


Contemporary evaluation approaches routinely promote the involvement of a wide range of stakeholders, employing methods that allow a more equal opportunity for the expression of views and the sharing of lessons. Social learning and empowerment reinforce each other: empowerment is the process of enhancing the capacity of individuals or groups to make choices and to transform those choices into desired actions and outcomes. These approaches are no longer confined to community projects; they are now mainstream within organisations, institutions and other agencies.
