Worldwide there is a trend towards an increased use of indicators to monitor development and track progress. This is evident at all levels and is reflected in the proliferation of indicator reports in recent years. Indicators quantify and simplify phenomena, and help us understand and make sense of complex realities. Within natural resource management their greatest strength is in the way they can help us assess resource status and monitor performance effectiveness. To be most meaningful, a monitoring programme should provide insights into cause-and-effect relationships between environmental or socio-economic stressors and the anticipated ecosystem responses and subsequent social and economic outcomes.
As the linked resources on this page show, reviewers of effective indicator reporting processes highlight the importance of using a conceptual framework and models to guide the development of a set of indicators. These frameworks and models provide a formal way of thinking about a topic area and help us build a coherent set of indicators for any particular system. They help to ensure that the selection of indicators is relevant and balanced, and they illustrate the complicated links between indicators. They also provide a useful device for organising and reporting on indicators in a structured and meaningful way. The absence of a framework can result in the generation of an eclectic mix of indicators, with no clear rationale for their selection.
Effective indicators for freshwater management: attributes and frameworks for development
Indicators quantify and simplify phenomena, and help us understand and make sense of complex realities. However, their greatest strength is in aiding management. To be useful, indicators must be embedded in a monitoring and evaluation (M&E) system that is seen as an integral component of the wider management and decision-making system. This 2012 report by Will Allen, Andrew Fenemor and David Wood outlines key steps for indicator-based reporting. These include involving the right people, and clarifying with them the purpose, scope and scale of the management system under consideration. Developing a conceptual framework and models is crucial both to aid shared understanding and to identify what needs to be evaluated. Within natural resource management these tend to be either programme-outcome-based or driver/pressure-based. Irrespective of which framework is chosen, the report notes that it will still be important to provide three sets of supporting information to underpin the utility and transparency of the subsequent models: i) scoping and planning; ii) well-documented underpinning assumptions; and iii) internal and external factors that influence outcomes. Attention is also paid to indicator characteristics, and to the capacities and systems required to support collaborative adaptive management.
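To make the driver/pressure-based option above concrete, here is a minimal sketch of a DPSIR-style (driver–pressure–state–impact–response) grouping of indicators. The categories follow the standard DPSIR convention; the indicators themselves are hypothetical examples for a freshwater catchment, not drawn from the report.

```python
# Hypothetical DPSIR grouping for a freshwater catchment.
# The indicator names are illustrative only.
DPSIR = {
    "driver":   ["dairy herd size"],
    "pressure": ["nitrogen leaching rate"],
    "state":    ["river nitrate concentration"],
    "impact":   ["macroinvertebrate community index"],
    "response": ["riparian fencing (km)"],
}

def category_of(indicator):
    """Return the DPSIR category an indicator belongs to, or None."""
    for category, indicators in DPSIR.items():
        if indicator in indicators:
            return category
    return None
```

Grouping indicators this way makes it easy to check that a candidate set is balanced — that each link in the driver-to-response chain has at least one indicator attached to it.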
A conceptual framework for selecting environmental indicator sets
In recent years, environmental indicators have become a vital component of environmental impact assessments and ‘state of the environment’ reporting. This has increased the influence of environmental indicators on environmental management and policy making at all scales of decision making. However, as David Niemeijer and Rudolf de Groot point out in this 2008 paper, the scientific basis of the selection process of the indicators used in environmental reporting can be significantly improved. In many studies no formal selection criteria are mentioned, and when selection criteria are used they are typically applied to indicators individually. Often, no formal criteria are applied regarding an indicator’s analytical utility within the total constellation of a selected set of indicators. As a result, the indicator selection process is subject to more or less arbitrary decisions, and reports dealing with a similar subject matter or similar geographical entities may use widely different indicators and consequently paint different pictures of the environment. In this paper, a conceptual framework for environmental indicator selection is proposed that puts the indicator set at the heart of the selection process, not the individual indicators. To achieve this objective, the framework applies the concept of the causal network, which focuses on the interrelation of indicators. The concept of causal networks can facilitate the identification of the most relevant indicators for a specific domain, problem and location, leading to an indicator set that is at once transparent, efficient and powerful in its ability to assess the state of the environment.
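The set-oriented idea can be sketched in a few lines of code: represent the causal network as directed cause-and-effect links, then favour indicators that touch the most links, so the chosen set covers the network rather than each indicator being rated in isolation. The network below is a hypothetical freshwater example, not one from the paper.

```python
# Hypothetical causal network: (cause, effect) pairs among candidate indicators.
CAUSAL_LINKS = [
    ("fertiliser use", "nutrient runoff"),
    ("nutrient runoff", "algal blooms"),
    ("algal blooms", "dissolved oxygen"),
    ("dissolved oxygen", "fish abundance"),
    ("land clearing", "nutrient runoff"),
]

def rank_by_connectivity(links):
    """Rank candidate indicators by how many causal links they touch,
    so a small indicator *set* can cover most of the network."""
    degree = {}
    for cause, effect in links:
        degree[cause] = degree.get(cause, 0) + 1
        degree[effect] = degree.get(effect, 0) + 1
    return sorted(degree, key=degree.get, reverse=True)
```

In this toy network, "nutrient runoff" sits on three causal links, so a set that includes it monitors several cause-and-effect pathways at once — the kind of set-level reasoning the paper argues for.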
Evaluation rubrics
This BetterEvaluation page illustrates how rubrics work, using a rating scale to assess group process capacity in dryland community groups – to track their progress and to focus planning for the next stage of the project. In this way a rubric clearly sets out criteria and standards for assessing different levels of project, individual, group or organisational performance. Rubrics have often been used in education for grading student work, and in recent years have been applied more widely in evaluation to make transparent the process of synthesising evidence into an overall evaluative judgement.
Evaluation rubrics: how to ensure transparent and clear assessment that respects diverse lines of evidence
In this 2013 article Judy Oakden presents an education sector evaluation to illustrate the use of a logic model (to identify boundaries) and rubrics (to make evaluative judgements). The article shows how rubrics offer project stakeholders a process for making explicit the judgements in an evaluation, and are used to judge the quality, the value, or the importance of the service provided. Rubrics are made up of: i) evaluative criteria: the aspects of performance the evaluation focuses on; and ii) merit determination: the definitions of what performance looks like at each level.
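The two rubric components above map naturally onto a small data structure: a list of evaluative criteria and a set of merit levels with their definitions. The sketch below is illustrative only — the criteria and level descriptions are hypothetical, not taken from Oakden's article — but it shows how per-criterion ratings can be synthesised into one overall evaluative judgement.

```python
# A minimal rubric sketch; criteria and merit levels are hypothetical.
RUBRIC = {
    "criteria": ["engagement", "quality of delivery", "use of evidence"],
    "levels": {  # merit determination: what performance looks like at each level
        4: "excellent: consistently strong performance, no significant weaknesses",
        3: "good: strong performance overall, with minor weaknesses",
        2: "adequate: meets minimum expectations",
        1: "poor: does not meet minimum expectations",
    },
}

def overall_judgement(scores):
    """Synthesise per-criterion ratings (1-4) into one evaluative judgement."""
    missing = set(RUBRIC["criteria"]) - set(scores)
    if missing:
        raise ValueError(f"unrated criteria: {missing}")
    mean = sum(scores.values()) / len(scores)
    # Report the merit level whose numeric value is closest to the mean rating.
    level = min(RUBRIC["levels"], key=lambda lv: abs(lv - mean))
    return RUBRIC["levels"][level]
```

For example, ratings of 4, 3 and 3 on the three criteria average to about 3.3, so the overall judgement reported is the "good" level. Averaging is only one possible synthesis rule; in practice evaluators often weight criteria or set minimum thresholds instead, and the point of the rubric is that whichever rule is used is made explicit to stakeholders.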
Incorporating local sustainability indicators into structures of local governance: a review of the literature
Too often studies about sustainability indicators focus either on the science that goes into indicator development, seeking to make them rational and relevant, or on the soft impacts such as social capital, community empowerment or capacity building that are outcomes of their use. When attention is turned to what effect they have on policy, it is often difficult to discern any link between their use and policy change. This 2009 paper by Nancy Holman seeks to address this problem by consolidating current thinking on indicators and asking the question: how far have notions of governance been incorporated into current research into indicators? The answer to this question has implications for the continuing utility of indicators as policy tools, not only in so far as they are able to aid the evaluation of policy, but also, and arguably more importantly, in how they are able to facilitate relationships between actors and act as a catalyst around which various contested meanings of sustainability can be evaluated.
Monitoring and evaluation in conservation: a review of trends and approaches
This paper by Caroline Stem, Richard Margoluis, Nick Salafsky and Marcia Brown (2005) notes the growing recognition among conservation practitioners and scholars that good project management is integrally linked to well-designed monitoring and evaluation systems. They also observe that in practice the results are mixed, with practitioners failing to learn from each other. This paper reviews monitoring and evaluation approaches in conservation and other fields, including international development, public health, family planning, education, social services, and business. The results are presented here for the field of conservation. The authors suggest that the conservation community continue support of collaborative initiatives to improve monitoring and evaluation, establish clear definitions of commonly used terms, clarify monitoring and evaluation system components, apply available approaches appropriately, and include qualitative and social variables in monitoring efforts.
Good practice guidelines for the development and reporting of indicators
To help ensure the integrity, quality and transparency of indicator reports, Statistics New Zealand (2009) has produced a set of good practice guidelines. The guidelines are a web-based product and provide practical advice on the factors that should be addressed in developing indicator reports and the characteristics of good practice. They also include case studies and links to relevant resources where further information can be obtained. The guidelines are structured into five sections which represent the five main stages in the development and reporting of indicators: i) establishing the purpose of the indicators; ii) designing the conceptual framework; iii) selecting and designing the indicators; iv) interpreting and reporting the indicators; and v) maintaining and reviewing the indicators. This report summarises the characteristics of good practice associated with each of these stages in indicator development and reporting, and illustrates them with case studies of indicator initiatives in New Zealand and Australia.
You may also be interested in a related page in the knowledge management section with links on how best to develop conceptual models. Another related page in this section provides a number of links outlining how to develop programme-based outcomes models, also called intervention logic models.