Complex projects require complexity-aware M&E approaches to highlight blind spots and adapt to changing conditions. Developing a Theory of Change (ToC) and using impact pathways (or logic models) can help facilitate collaboration and planning across different timeframes and scales. However, in complex settings, this requires shifting from attribution to contribution, explicitly recognising the roles of partners and other actors in achieving and demonstrating outcomes.
In complex systems, cause-and-effect relationships are often unpredictable, leading to unexpected positive and negative outcomes. While logic models help ensure actions align with intentions, they should be used as flexible guides rather than rigid roadmaps. Supporting complex programmes—whether in agriculture, public health, natural resource management, or energy—requires monitoring, evaluation, and learning (MEL) strategies that embrace uncertainty. Complexity-aware M&E (CAME) techniques can help identify unintended outcomes, multiple pathways to impact, and alternative causes of change.
By embedding continuous learning into programme management, stakeholders can adapt and respond to emerging insights and challenges. This approach strengthens effectiveness while acknowledging the multifaceted nature of change in complex environments. The resources below provide insights into the theories, approaches, and tools underpinning complexity-aware monitoring, evaluation, and learning (CAME/CAMEL).
Complexity-aware evaluation for learning
This 2025 paper by Eureta Rosenberg and colleagues examines the design and implementation of a complexity-aware, learning-centred M&E framework within a broad development and environmental programme in the Olifants River Basin. Using a participatory, developmental approach, it explores challenges, adaptations, and learning outcomes. Findings highlight how working with standard M&E elements in new ways—through reflection days, case studies, and novel reporting—supported organisational learning while balancing accountability needs.
Complexity-Aware Monitoring and Evaluation
This 2021 paper by Tilman Hertz, Eva Brattander, and Loretta Rose explores the elements of a complexity-aware M&E system for sustainable development programmes. The authors argue that traditional M&E approaches relying on rigid Theories of Change fail to capture dynamic, unpredictable systems. They propose an iterative, real-time approach that fosters flexibility, experimentation, and innovation, helping donors and implementers track emergent change rather than measure progress against pre-determined plans.
Introducing CAMEL: Counterpart’s Complexity-Aware Monitoring, Evaluation, and Learning Framework
Counterpart’s CAMEL framework integrates complexity-aware monitoring (C-AM) with Collaborating, Learning, and Adapting (CLA) to support development programmes in complex environments. Applied in the 2019 USAID-funded PRG-PA project in Niger, CAMEL employs sentinel indicators to track significant changes, while regular pause-and-reflect sessions support adaptive management. The framework draws on Ralph Stacey’s Agreement and Certainty Matrix and Michael Bamberger’s Complexity Checklist, emphasising flexibility, continuous learning, and stakeholder engagement.
Discussion Note: Complexity Aware Monitoring
This 2016 USAID discussion note outlines principles and approaches for monitoring complex aspects of development programmes. It discusses five key complexity-aware monitoring approaches: sentinel indicators, stakeholder feedback, process monitoring of impacts, most significant change, and outcome harvesting. These methods help address dynamic contexts and unclear cause-effect relationships, supporting adaptive decision-making.
Twinning “Practices of Change” With “Theory of Change”: Room for Emergence in Advocacy Evaluation
This 2017 paper by Bodille Arensman, Cornelie van Waegeningh, and Margit van Wessel argues that Theories of Change (ToC) often oversimplify complex interventions like advocacy. Instead, they propose focusing on “practices of change”, which centre on human interactions and emergent outcomes. The paper advocates for strategies-as-practice and recursiveness to better reflect the reality of complex, adaptive change.
Performance monitoring’s three blind spots
This 2013 presentation by Ricardo Wilson-Grau highlights three common blind spots in performance monitoring: unintended results, alternative causes, and multiple pathways. It emphasises that linear models often fail to capture how change happens in complex systems, making alternative evaluation approaches essential.
Contribution analysis: A new approach to evaluation in international development
This paper by Fiona Kotvojs and Bradley Shrimpton explores AusAID’s shift to contribution analysis, which moves beyond attribution-based evaluation. Using the Fiji Education Sector Program as a case study, it examines how contribution analysis helps assess programme impact while addressing causality and complexity.
Complex adaptive systems: a different way of thinking about health care systems
This 2004 paper by Beverly Sibthorpe and colleagues provides a brief introduction to complex adaptive systems (CAS) thinking in healthcare. It underscores the importance of M&E frameworks that account for feedback loops, tipping points, and emergent change, rather than relying on linear models.
Appreciating the recursive and emergent nature of Complex Adaptive Systems
Strategy-as-Practice: Taking Social Practices Seriously
This 2012 paper by Eero Vaara and Richard Whittington reviews strategy-as-practice (SAP) research, which situates strategy-making within broader social and organisational practices. The authors propose five directions for further development: placing agency within a web of practices, recognising macro-institutional influences, focusing on emergence, considering material influences, and promoting critical analysis.
Using programme theory to evaluate complicated and complex aspects of interventions
This 2008 paper by Patricia Rogers outlines how programme theory can help evaluate complex interventions. It highlights methods for representing recursive causality, reinforcing loops, disproportionate relationships (tipping points), and emergent outcomes, making it a valuable tool for complexity-aware evaluation.
For further insights into complexity-aware monitoring, evaluation, and learning, explore related site pages on planning, monitoring & evaluation, Theory of Change, and logic models. You may also find rubrics useful for assessing complex initiatives. These resources support adaptive management, learning-based evaluation, and outcome-focused planning, helping to navigate complexity and enhance impact.