Evaluation-based methods and approaches
Evaluation approaches provide different ways to think about, design, and conduct evaluation efforts. Some approaches help solve problems; others refine and build on existing approaches; still others show how different methods and tools can be linked together. This page begins with annotated links to a range of evaluation approaches, tools, and methodologies. The second section links to portal and journal sites focusing on M&E resources and practices. More links to planning, monitoring, and evaluation resources can be found in the site index above.
Evaluation Checklists Web site
This site provides evaluation specialists and users with refereed checklists for designing, budgeting, contracting, staffing, managing, and assessing evaluations of programs, personnel, students, and other evaluands; collecting, analyzing, and reporting evaluation information; and determining merit, worth, and significance. Each checklist is a distillation of valuable lessons learned from practice.
A Field Guide to Ripple Effects Mapping
This participatory data collection method is designed to capture the impact of complex programs and collaborative processes. Well-suited for evaluating group-focused efforts, Ripple Effects Mapping involves aspects of Appreciative Inquiry, mind mapping, facilitated discussion, and qualitative data analysis. As the REM process unfolds, the intended and unintended impacts of participant efforts are visually displayed in a way that encourages discussion and engagement.
Outcome Mapping Learning Community
Outcome mapping (OM) is a methodology for planning and assessing development programming that is oriented towards change and social transformation. OM provides a set of tools to design and gather information on outcomes, defined as behavioural changes, of the change process. OM helps a project or program learn about its influence on the progression of change in its direct partners. It thereby helps those involved in assessment to think more systematically and pragmatically about what they are doing, and to adaptively manage variations in strategy to bring about desired outcomes.
Making sense of complexity: Using SenseMaker as a research tool
This 2019 paper by Van der Merwe and colleagues provides an introduction to guide researchers in choosing when to use SenseMaker and to facilitate understanding of its execution and limitations. The Cognitive Edge SenseMaker® tool is one method for capturing and making sense of people’s attitudes, perceptions, and experiences. It is used for monitoring and evaluation; mapping ideas, mind-sets, and attitudes; and detecting trends and weak signals.
RCA Community of Practice
This link leads to the website of the RCA Community of Practice. The Reality Check Approach (RCA) is a qualitative research approach in which RCA-trained researchers live with people in their own homes and share in their everyday lives. The intention is to have unmediated conversations, observations, and experiences with people, in their own space and time, as they go about their daily lives. It is primarily immersive research based on the principles of ethnography, but its narrower focus (on relevance and usability, for example) and the short duration of immersions distinguish it from ethnography.
The Value for Investment approach
The Value for Investment approach was developed by Julian King and colleagues to bring clarity to assessing value for money. Value for Investment brings together multiple values (e.g. social, cultural, environmental and economic) and multiple sources of evidence (qualitative and quantitative) to gain a nuanced understanding of program costs, processes, consequences and value.
The ‘Most Significant Change’ (MSC) Technique: A Guide to Its Use
This 2005 guide by Rick Davies and Jess Dart outlines MSC as a form of participatory monitoring and evaluation. It is participatory because many project stakeholders are involved both in deciding the sorts of changes to be recorded and in analysing the data collected. Essentially, the process involves the collection of significant change (SC) stories emanating from the field level and the systematic selection of the most significant of these stories by panels of designated stakeholders or staff. It contributes to evaluation because it provides data on impact and outcomes that can be used to help assess the performance of the program as a whole.
Portals and journals
BetterEvaluation
An international collaboration to improve evaluation practice and theory by sharing information about options (methods or tools) and approaches. The site aims to guide you through the rapidly expanding range of choices available when planning and designing evaluation activities. With BetterEvaluation you can discover options and useful resources, share your experiences, and learn with peers.
Canadian Journal of Program Evaluation
The Canadian Journal of Program Evaluation is published three times a year by the Canadian Evaluation Society. The journal seeks to promote the theory and practice of program evaluation. To this end, CJPE publishes full-length articles on all aspects of the theory and practice of evaluation, as well as shorter Evaluation Practice Notes, which share practical knowledge, experiences, and lessons learned. The journal has a particular interest in articles reporting original empirical research on evaluation. CJPE is a fully bilingual journal, publishing in both English and French. Its readership includes academics, practitioners, and policymakers, and it attracts authors and readers internationally.