2017 in review – your favorite LfS content

 

Another year has passed, and this provides a good opportunity to reflect on what content resonated most with visitors to the Learning for Sustainability (LfS) site in 2017.  So – based on the site statistics* – here are the most visited pages in terms of topic areas, blog posts and downloads of hosted content.

The Learning for Sustainability (LfS) website operates as an international clearinghouse for on-line resources around collaboration, social learning and adaptation.  As you can see from the navigation bar above, it provides pages of annotated links pointing to targeted resources on a range of relevant and interlinked topics. During 2017 the site averaged more than 900 visits* each day.  Managing collaborations, complex problems, Theory of Change, systems thinking, and reflective practice featured heavily in the most requested content.

Most visited resource pages

While the LfS home page remained the most common point of entry, the three most visited resource pages were:

  • Theory of change (ToC). This page provides links to guides for using ToC – a methodological approach for planning, participation, and evaluation.  It shows how its use can help orient diverse program stakeholders to work together and plan for outcomes by envisaging a ‘big picture’ view of how and why a desired change is expected to happen in a particular context.
  • Systems thinking.  This page points to sites providing toolkits and tools to support systems thinking. It encourages practitioners to understand and analyse the contexts within which they operate, as a precursor to designing programs/policies that can adapt as conditions on the ground change.
  • Selecting evaluation questions and types. This page provides guides to help program managers to develop appropriate evaluation questions that are driven by funders, project participants and other key stakeholders. Further links highlight how different evaluation types (and/or methods) are distinguished by the nature of the questions they attempt to answer.

Most read LfS posts

Not surprisingly, some of our most popular and engaging content in 2017 came from blog posts providing introductory material that grounds key topic areas. Check out the posts that were most popular with readers over the last year.

Popular downloaded papers and reports

While the site primarily operates as a clearinghouse for on-line material hosted on sites all over the world, it does host a range of papers and reports. The three most downloaded documents were:

  • How Decision Support Systems can benefit from a Theory of Change approach.  This 2017 research paper begins by describing a ToC and how it can be used in conjunction with DSS development. It then illustrates how to apply a ToC approach using a pest (rabbit) management example in Australia, and ends with a discussion of the potential benefits and challenges of using the approach.
  • Stakeholder analysis. This 2010 book chapter reminds us that a stakeholder analysis is just one (albeit usually the first) step in building the relationships needed for the success of a participatory project or policy.  It covers steps in conducting such an analysis, and then outlines some best practice guidelines.
  • Building resilience in rural communities.  This 2008 report aims to provide a toolkit outlining ideas and information that could be included in new or existing social programs. It introduces and expands on 11 resilience concepts found to be pivotal in enhancing individual and community resilience.

 

* Site statistics compiled using AWStats


Social learning – what it looks like

Social learning emphasizes the importance of taking time to pause and engage in constructive dialogue. [Copyright: rawpixel / 123RF Stock Photo]
Social learning is an approach to working on complex environmental problems, particularly those with high degrees of uncertainty, many interested parties, and disagreement around causes, effects and even desired outcomes. There is no one definition of social learning, but the many descriptions of it emphasize the importance of dialogue between groups.  This dialogue and negotiation helps people to better understand different points of view, and to develop processes for collective action and reflection over time. This post provides a brief introduction to the concept – more information and links to a wealth of online material about it can be found directly from the LfS social learning page. There are also a number of links throughout this post that will take you directly to pages exploring different aspects of social learning.

To begin with, social learning is not what many people assume it to be – learning by people ‘out there’ about the important things we ‘in here’ think they should know! That is information dissemination, advice … or even indoctrination. At times we may feel it necessary to directly tell people ‘how it is’ – but this more linear approach to communication should not be confused with social learning.

The concept of social (or collaborative) learning refers to learning processes among a group of people who seek to improve a common situation and take action collectively. This understanding effectively extends experiential learning into social learning. By broadening their perspectives and taking collective action, people can become empowered. Empowerment, in this sense, can be seen as enhancing the capacity of individuals or groups to make choices, and to transform those choices into desired actions and outcomes. Achieving such outcomes requires a long-term social process that evolves over time and is woven from a number of different activity strands. These strands can be thought of in different ways; for example, this site highlights five key strands that underpin social learning:  i) systems thinking; ii) network building; iii) dialogue; iv) knowledge management; and v) reflective practice. Information on the way these different strands support social learning can be accessed through the ‘social learning’ index above.

Social learning, as an approach to complex problem solving, has emerged in recent years alongside other approaches such as adaptive management and systems thinking, and in fact includes the core essentials of both. In a nutshell it is about creating situations where people can learn collectively to improve a situation. The aim of adaptive management is to enable groups and organizations to adapt their practices and learn in a systematic way, often referred to as ‘learning by doing’. The kind of thinking required is essentially systems thinking. This is about having an appreciation of the characteristics of systems – that each element affects the operation of the whole, that parts of the system are interdependent, and so on. The focus of systems thinking is therefore on interaction. Furthermore, systems thinking requires a shift of mind, a willingness to look at problems from different perspectives. It looks at underlying systemic structures, and encourages people to look beyond discrete events to underlying patterns of behavior and underpinning mental models. The aim of systems thinking-based inquiry is to seek leverage, identifying where actions and changes in structures can lead to significant and enduring improvements.

The point is that social learning would not be ‘social’ if it were not about people and their interactions. Because we are dealing with complex issues that arise from settings with many stakeholders holding differing views, responsibilities, and knowledge about the system (including science, management agencies and people making decisions on-the-ground), social learning has to be about how to bring people together. In particular, it is about helping people work collaboratively – bridging disciplines, knowledge systems and cultures.  By keeping these concepts in mind we can aim to manage more interactions within participatory and learning-based contexts, helping those involved to engage in social learning and develop a shared understanding around goals, actions and indicators.


After action reviews – and how they can be linked with ToCs to support strategic thinking

This post introduces After Action Reviews (AARs), and indicates how theories of change and AARs can be used in tandem to create both the space and guidance for strategic learning, and subsequent adaptation and innovation.

The importance of reflecting on what you are doing, as part of the learning process, has been emphasized by many reviewers. Building the capacity to reflect on action so as to engage in a process of continuous learning is increasingly seen as an important aspect of behavior change, and it is beginning to be used in many models of changing professional practice. However, reflection is not a conscious behavior for many teams, and effort needs to be put into providing teams with tools that can support it. These tools are usually known under names such as After Action Review (AAR) or Learning Debrief, and are used to capture the lessons learned from past successes and failures, with the goal of improving future performance.

In terms of evaluation, AARs are starting to be paired with design processes such as program theory-based approaches. Developing results-based framework diagrams can help structure and deepen participants’ thinking and planning (both in developing action plans and monitoring plans) for a project. Then, after these plans have been implemented (action), an AAR can provide people with a platform and space to reflect on what happened and what they are learning about their project.

Four sets of questions that drive After Action Reviews (AARs)

An AAR is a form of group reflection; participants review what was intended (activity aims), what actually happened (intended and unintended outcomes), why it happened, and what was learned. AARs should contribute information that not only helps assess an immediate activity, but also helps assess progress towards the longer-term outcomes set out in the underlying Theory of Change (ToC) and accompanying logic models. AARs can be used as short, frequent group process checks, or as more extended, in-depth explorations to assess progress at a wider level. An AAR can be undertaken by a group or an individual as they ask themselves four questions (a simple way of recording the answers is sketched after the list):

  • What was supposed to happen? Asking each participant about their expectations can highlight problems in communication, as individuals often have different expectations.
  • What actually happened? By identifying what went on, an accurate picture can be built up.
  • Why was there a difference? This is where participants need to concentrate on the what – and not the who – behind any gap between expectations and what actually happened.
  • What can we learn from this? What learning points have been identified so that the organization or individual continues to improve?
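For teams that want to keep a written trace of their reviews, here is a minimal sketch (in Python, purely illustrative – the record structure and field names are our own invention, not part of any formal AAR method) of how the answers to the four questions could be captured in a consistent way:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AARRecord:
    """A minimal record of one After Action Review (illustrative only)."""
    activity: str                                          # the activity or event being reviewed
    expected: List[str] = field(default_factory=list)      # What was supposed to happen?
    actual: List[str] = field(default_factory=list)        # What actually happened?
    differences: List[str] = field(default_factory=list)   # Why was there a difference?
    lessons: List[str] = field(default_factory=list)       # What can we learn from this?

# Example: a short debrief after a (hypothetical) community planning workshop
review = AARRecord(
    activity="Community planning workshop",
    expected=["All partner groups attend", "Draft action plan agreed"],
    actual=["Two partner groups absent", "Plan drafted but not agreed"],
    differences=["Invitations sent too late for some partners"],
    lessons=["Confirm attendance two weeks ahead", "Circulate the draft plan before the meeting"],
)
```

Recording reviews in a consistent structure like this makes it easier to document lessons and share them with a wider audience, as noted below.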

AARs can be conducted almost anywhere, and will vary in length. For example, an AAR can be conducted after a one-off event (in 15 minutes or so), or a much more focused meeting could be held to reflect on how planned intermediate and longer-term outcomes are being achieved at an organizational or program level. It is important that the underlying culture of those undertaking an AAR is one of openness and learning. It is not about allocating blame, but ensuring that those involved can move forward (and adapt where necessary). Lessons are not only shared by the individuals involved but can be documented and shared with a wider audience.

Every organization, every partnership or team, and every intervention will likely require different levels of preparation, execution, and review. However, a number of best practices do emerge across the literature:

  • Lessons must first and foremost benefit the team that develops them. The AAR process must start at the beginning of the activity (with clear aims set out in advance through a ToC or outcomes modeling process). AAR lessons must link explicitly to future actions that support desired outcomes. And leaders must hold everyone, especially themselves, accountable for learning.
  • Managers and facilitators should phase in an AAR culture. This can begin with facilitating simple AARs around team projects – keeping things simple at first and developing the process slowly, adding knowledge-sharing activities and systems, richer metrics, and other features dictated by the particular initiatives in question.
  • If there are issues with either openness or time, it may be worthwhile to gather individual ideas first – and then facilitate a group discussion.

By creating tight feedback cycles between planning and action, AARs build team and organizational capacity to succeed in a variety of conditions. AARs are not just a source of lessons; they provide a low-technology tool which teams and communities can use to draw new lessons from novel and evolving situations for which they did not train – situations they may not even have imagined. In a fast-changing environment, the capacity to learn lessons and adapt can be more valuable than any individual lesson learned. That capacity – which can be used for adaptive management or for innovation – is what can be gained by more closely linking outcome planning with learning-based reflective activities such as AARs.

A number of other Learning for Sustainability pages provide additional information on this topic. The page on selecting evaluation questions and types directly builds on this topic and includes links to additional external resources. A number of other pages can be accessed through the introductory page titled Planning, monitoring & evaluation – closing the loop.


Using rubrics to assess complex tasks and behaviors

Developing rubrics – involve people in brainstorming criteria to be assessed

Rubrics are an easily applied form of assessment. They are most commonly used in education, and offer a process for defining and describing the important components of work being assessed. They can help us assess complex tasks (e.g. essays or projects) or behaviors (e.g. collaboration, team work). Increasingly, rubrics are being used to help develop assessments in other areas such as community development and natural resource management.

In a recent paper we describe how rubrics are being used to help collaboration in an integrated research programme – Bridging disciplines, knowledge systems and cultures in pest management.

Although the format of rubrics can vary, they all have two key components:

  • A list of criteria – or what counts in an activity or task
  • Gradations of quality – to provide an evaluative range or scale.

Developing rubrics helps clarify the expectations that people have for different aspects of task or behavior performance by providing detailed descriptions of collectively agreed-upon expectations. Well-designed rubrics increase the reliability and validity of an assessment, and ensure that the information gathered can be used to help people assess their management efforts, and improve them. A rubric differs from a simple checklist in that it also describes the gradations of quality (levels) for each dimension of the performance being evaluated. It is important to involve program participants in developing rubrics, helping to define and agree on the criteria and assessment. This broad involvement increases the likelihood that different evaluation efforts can provide comparable ratings. As a result, the assessments based on these rubrics will be more effective and efficient.

Involving people in developing rubrics takes a number of steps; a simple illustration of the resulting structure follows the list.

  • Defining the task to be rated. This can include consideration of both outputs (things completed) and processes (level of participation, required behaviors, etc.).
  • Defining criteria to be assessed. These should represent the component elements that are required for successful achievement of the task to be rated. The different parts of the task need to be set out simply and completely. This can often be started by asking participants to brainstorm what they might expect to see where/when the task is done very well … and very poorly.
  • Developing scales which describe how well any given task or process has been performed. This usually involves selecting 3-5 levels. Scales can use different language such as:
    –   Advanced, intermediate, fair, poor
    –   Exemplary, proficient, marginal, unacceptable
    –   Well-developed, developing, under-developed

More resources on rubrics can be found from the main Learning for Sustainability rubrics page – Rubrics – as a learning and assessment tool for project planning and evaluation. More material can also be found from the indicators page. Other related PM&E resources can be found from the Theory of Change and Logic Modelling pages.
