Social learning – what it looks like

Social learning emphasizes the importance of taking time to pause and engage in constructive dialogue. [Copyright: rawpixel / 123RF Stock Photo]
Social learning is an approach to working on complex environmental problems, particularly those with high degrees of uncertainty, many interested parties, and disagreement around causes, effects and even desired outcomes. There is no single definition of social learning, but most descriptions emphasize the importance of dialogue between groups. This dialogue helps people to better understand different points of view, and to develop processes for collective action and reflection over time. This post provides a brief introduction to the concept – more information and links to a wealth of online material can be found directly from the LfS social learning page. A number of links throughout this post will also take you directly to pages that explore different aspects of social learning.

To begin with, social learning is not what many people mistake it for – learning by people ‘out there’ about the important things we ‘in here’ think they should know! That is information dissemination, advice … or even indoctrination. At times we may feel it necessary to tell people directly ‘how it is’ – but this more linear approach to communication should not be confused with social learning.

The concept of social (or collaborative) learning refers to learning processes among a group of people who seek to improve a common situation and take action collectively. This understanding effectively extends experiential learning into collaborative (or social) learning. By broadening their perspectives and taking collective action, people can become empowered. Empowerment, in this sense, can be seen as enhancing the capacity of individuals or groups to make choices, and to transform those choices into desired actions and outcomes. Achieving such outcomes requires a long-term social process that evolves over time and is woven from a number of different activity strands. These strands can be thought of in different ways; this site highlights five key strands that underpin social learning: i) systems thinking; ii) network building; iii) dialogue; iv) knowledge management; and v) reflective practice. Information on how these different strands support social learning can be accessed through the ‘social learning’ index above.

Social learning, as an approach to complex problem solving, has emerged in recent years alongside other approaches such as adaptive management and systems thinking (in fact it includes the core essentials of both). In a nutshell, it is about creating situations where people can learn collectively to improve a situation. The aim of adaptive management is to enable groups and organizations to adapt their practices and learn in a systematic way, often referred to as ‘learning by doing’. The kind of thinking required is essentially systems thinking. This means appreciating the characteristics of systems – that each element affects the operation of the whole, that the parts of a system are interdependent, and so on. The focus of systems thinking is therefore on interaction. Furthermore, systems thinking requires a shift of mind, a willingness to look at problems from different perspectives. It looks at underlying systemic structures, and encourages people to look beyond discrete events to underlying patterns of behavior and the mental models that underpin them. The aim of systems thinking-based inquiry is to seek leverage – seeing where actions and changes in structures can lead to significant and enduring improvements.

The point, of course, is that social learning would not be ‘social’ if it was not about people and their interactions. Because we are dealing with complex issues that arise in settings with many stakeholders with differing views, responsibilities and knowledge about the system (including science, management agencies and people making decisions on the ground), social learning has to be about how to bring people together. In particular, it is about helping people work collaboratively – bridging disciplines, knowledge systems and cultures. By keeping these concepts in mind we can aim to manage more interactions within participatory and learning-based contexts, helping those involved to engage in social learning and develop a shared understanding around goals, actions and indicators.

After action reviews – and how they can be linked with ToCs to support strategic thinking

This post introduces After Action Reviews (AARs), and indicates how theories of change and AARs can be used in tandem to create both the space and guidance for strategic learning, and subsequent adaptation and innovation.

The importance of reflecting on what you are doing, as part of the learning process, has been emphasized by many reviewers. Building the capacity to reflect on action so as to engage in a process of continuous learning is increasingly seen as an important aspect of behavior change, and it is beginning to be used in many models of changing professional practice. However, it is not a conscious behavior for many teams, and effort needs to be put into providing teams with tools that support reflection. These tools usually go under names such as After Action Review (AAR) or Learning Debrief, and are used to capture the lessons learned from past successes and failures, with the goal of improving future performance.

In terms of evaluation, they are starting to be paired with design processes such as program theory-based approaches. Developing results-based framework diagrams can help with structuring and deepening participants’ thinking and planning (both in developing action plans and monitoring plans) for a project. Then, after these plans have been implemented (action), an AAR can provide people with a platform and space to reflect on what happened and what they are learning about their project.

Four sets of questions that drive After Action Reviews (AARs)

An AAR is a form of group reflection; participants review what was intended (activity aims), what actually happened (intended and unintended outcomes), why it happened and what was learned. AARs should contribute information that not only helps assess an immediate activity, but also helps assess progress towards the longer-term outcomes set out in the underlying Theory of Change (TOC) and accompanying logic models. AARs can be used for short, frequent group process checks, or for more extended, in-depth explorations that assess progress at a wider level. An AAR can be undertaken by a group or an individual as they ask themselves four questions (a simple template for recording the answers is sketched after the list below):

  • What was supposed to happen? Asking each participant about their expectations can highlight problems in communication, as individuals often hold different expectations.
  • What actually happened? By identifying what went on, an accurate picture can be built up.
  • Why was there a difference? This is where participants need to concentrate on the ‘what’ – and not the ‘who’ – behind any gap between expectations and what actually happened.
  • What can we learn from this? What learning points have been identified so that the organization or individual continues to improve?
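For teams that want to capture AAR answers consistently from one session to the next, the four questions can be recorded in a simple structured template. The sketch below is illustrative only – a minimal Python structure with field names of our own choosing; a shared form or spreadsheet with the same four fields works just as well.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AfterActionReview:
    """Minimal record of one AAR session, organised around the four core questions."""
    activity: str            # the activity or event being reviewed
    intended: List[str]      # what was supposed to happen (aims, expected outcomes)
    actual: List[str]        # what actually happened (intended and unintended)
    differences: List[str]   # why there was a difference (focus on the 'what', not the 'who')
    lessons: List[str]       # what can be learned, linked to future actions
    follow_up_actions: List[str] = field(default_factory=list)  # concrete next steps

# Hypothetical example: a quick review after a one-off stakeholder workshop
review = AfterActionReview(
    activity="Catchment stakeholder workshop, June",
    intended=["Agree three shared indicators of progress"],
    actual=["Two indicators agreed", "Unplanned discussion of funding gaps"],
    differences=["Indicator discussion ran long because baseline data were unclear"],
    lessons=["Circulate baseline data a week before the next workshop"],
    follow_up_actions=["Share a baseline summary with all participants by end of month"],
)
```

Keeping the records in a common structure like this makes it easier to look back across several AARs and link recurring lessons to the outcomes set out in the underlying TOC.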

AARs can be conducted almost anywhere, and will vary in length. For example, an AAR can be conducted after a one-off event (in 15 minutes or so), or a much more focused meeting could be held to reflect on how planned intermediate and longer-term outcomes are being achieved at an organizational or program level. It is important that the underlying culture of those undertaking an AAR is one of openness and learning. It is not about allocating blame, but ensuring that those involved can move forward (and adapt where necessary). Lessons are not only shared by the individuals involved but can be documented and shared with a wider audience.

Every organization, every partnership or team, and every intervention will likely require different levels of preparation, execution, and review. However, a number of best practices do emerge across the literature:

  • Lessons must first and foremost benefit the team that develops them. The AAR process must start at the beginning of the activity (with clear aims set out in advance through a TOC or outcomes modeling process). AAR lessons must link explicitly to future actions that support desired outcomes. And leaders must hold everyone, especially themselves, accountable for learning.
  • Managers and facilitators should phase in an AAR culture. This can begin with facilitating simple AARs around team projects – keeping things simple at first and developing the process slowly—adding knowledge-sharing activities and systems, richer metrics, and other features dictated by the particular initiatives in question.
  • If there are issues with either openness or time, it may be worthwhile to gather individual ideas first – and then facilitate a group discussion.

By creating tight feedback cycles between planning and action, AARs build team and organizational capacity to succeed in a variety of conditions. AARs are not just a source of lessons; they provide a low-technology tool that teams and communities can use to draw new lessons from novel and evolving situations for which they did not train – situations they may not even have imagined. In a fast-changing environment, the capacity to learn lessons and adapt can be more valuable than any individual lesson learned. That capacity – which can be used for adaptive management or for innovation – is what can be gained by more closely linking outcome planning with learning-based reflective activities such as AARs.

A number of other Learning for Sustainability pages provide additional information on this topic. The page on selecting evaluation questions and types directly builds on this topic and includes links to additional external resources. A number of other pages are introduced through the introductory page titled Planning, monitoring & evaluation – closing the loop.

Key evaluation questions (KEQs)

This post explores how evaluations benefit from being focused on a small set of key questions. These are often referred to as key evaluation questions (KEQs). They should be seen as high-level questions that assess progress towards the main specified outcomes, and will be answered by combining data from several sources and methods.

Evaluations provide an opportunity for your (or your client’s) intervention’s overall progress to be considered, including focused consideration of specific aspects of the initiative. A well-developed theory of change (TOC) and accompanying logic models provide an outline that helps to develop measures of success tracing the intervention’s development and impact over time. These measures, in turn, need to be focused with appropriate KEQs that are driven by funders, project participants and other key stakeholders.

Criteria for developing Key Evaluation Questions (KEQs)

The five criteria for evaluating interventions (relevance, effectiveness, efficiency, impact, and sustainability) outlined in the OECD/DAC evaluation guidelines provide a good starting framework for a range of initiatives in development areas (health, natural resource management, community resilience, etc.). Evaluation questions for a complex intervention should also address context, the reasons for adaptation and the emergence of activities and outcomes, the different perspectives and inter-relationships that affect project success, and sustainability and transferability.

A useful starting set of key evaluation questions to guide initial analysis is:

  • Is the research delivering on outputs and outcomes as planned? (efficiency and effectiveness)
  • Have applied activities and their delivery methods been effective? Are there aspects that could have been done differently? (process effectiveness)
  • Is the wider project story being told? What range of outcomes (intended and unintended) has the research project contributed to, taking account of social, economic, environmental and cultural considerations? (impact)
  • How has the project influenced the stakeholder community, and what capacities has it built? (impact)
  • Is the project being delivered on budget? What aspects of the participatory elements of the project could be done differently next time to cut costs while still delivering achievements? (efficiency)
  • Is the project impacting positively on key groups and issues that have been identified as important in project design? (impact)
  • Is there evidence that the initiative is likely to grow – scaling up and out – beyond the project life? (sustainability)
  • To what extent did the initiative deliver against the needs of key stakeholders? Were the size, scale and approach taken for each need appropriate? (impact & efficiency)

These questions need to be clarified by key project stakeholders. Some may be amended, others dropped, and new questions can be included. Developing these questions also provides an opportunity to revise the underlying theory of change and any accompanying logic or outcome models. In this way KEQs can be seen to help intervention planning and evaluation.

A number of other Learning for Sustainability pages provide additional information on this topic. The page on selecting evaluation questions and types directly builds on this topic and includes links to additional external resources on KEQs. A number of other related M&E themes are introduced through the introductory page – Planning, monitoring & evaluation – closing the loop.


More about outcomes – why they are important … and elusive!

This post looks more specifically at outcomes, and how they can be developed and written. It highlights the benefits of focusing on outcomes for project planning, implementation and evaluation. It also provides some tips and ideas for involving program staff and stakeholders in developing and working with outcome statements.

Until recently, the performance of many public sector programs has been judged largely on inputs, activities and outputs. Over recent years this approach has been increasingly questioned as being too concerned with efficiency considerations, without a corresponding focus on what benefits are actually arising from program funding and activities. Increasingly the trend is moving towards a focus on the specification and achievement of outcomes, revealing more about how effective programs are in achieving real development changes on-the-ground.

Outputs are the goods and services that result from activities. Outcomes are the constructive impacts on people or environments. In the past, planning and evaluation have tended to focus on program outputs, or how we keep ourselves busy – the ‘what we do’ and ‘who we do it with’. This enables us to tell our partners, funders and stakeholders about what the program does, the services it provides, how it is unique, and who it serves. We can describe and count our activities and the different goods and services we produce. Now, however, we are being asked what difference it makes! This is a question about outcomes (see figure). Outcomes are the changes, benefits, learning or other effects that happen as a result of what the program offers or provides. Outcomes are usually specified in terms of either: i) social and organizational capacities (social outcomes – e.g. learning, understanding, perceptions, attitudes and behaviors), or ii) state conditions (the bio-physical, ecological, social or economic changes in a system).

Logic models highlight the need to consider both program efficiency and effectiveness


While most people intuitively appreciate this distinction between outputs and outcomes, experience in results-oriented training sessions suggests that for many program staff, turning that appreciation into practice takes time. As the Keystone (2009) guide points out, it takes most people quite a lot of conscious practice before they start thinking in terms of outcomes, rather than outputs or needs or activities. An outcome statement describes a result – a change that has taken place. It is not a needs statement, or an activity that is still in progress. Outputs comprise the products and activities that you deliver, while outcomes are what we see as a result of those outputs. One simple test is to ask two questions of each statement: i) is it written as an outcome? and ii) does it describe changes that we can plausibly enable or facilitate in people, groups, institutions or environments?

Outcomes may be specified in different ways. Often a distinction is made between short-term, intermediate and long-term, or just intermediate and long-term. Short-term outcomes can be seen as the immediate difference that your program makes in the wider environment. A long-term outcome often has a number of short-term and intermediate outcomes that together contribute to the ultimate achievement of the long-term outcome. Collectively these outcomes should contribute explicitly to the wider vision underpinning program development. An intermediate outcome is a specified intermediate state that contributes to the desired long-term outcome – a step along the way. Intermediate outcomes are especially useful when time lags in measurable state outcomes are significant or limit timely response.

The program outcomes and intermediate outcomes should be structured in a logical hierarchy reflecting how each leads to another and/or contributes to the long-term community outcome(s). A useful way of doing this is to take each outcome and ask the question, ‘If we achieve this, what will it lead to and how will it contribute to the long-term outcome?’ Look for gaps – starting from the highest level outcome and working down the outcomes model. A test is being able to read an outcome and say, ‘Yes, this will likely be achieved if all of these initial (contributing intermediate) outcomes (and corresponding outputs) are achieved.’ The answers to these questions will enable you to draft a succinct statement of each outcome.

Each outcome statement should therefore define what will change as a result of an intervention and by how much (or, at the very least, in what direction the change will occur). This then allows the means of performance measurement to be defined. The more clearly an outcome statement specifies a desired change, the easier it is to define an appropriate indicator or indicator set.

It is not always easy to identify outcomes, and harder still to clarify them, but there are a number of key questions that can help. For example, begin by asking what is/will be different as a result of the initiative? For whom? What will be changed/improved? What do/will beneficiaries and other stakeholders say is the value of the program? For an existing program, look at the major activities. For each activity, ask yourself, ‘Why are we doing that?’ Usually, the answer to the ‘Why?’ question is an outcome. Most importantly, seek ideas and input from others. Their perspectives will help provide a broader understanding of the program and its benefits. This activity will also help build consensus among key program stakeholders.

When writing outcomes be sure to describe the desired change. Keep your outcomes SMART: Specific, Measurable, Achievable, Relevant, Time-limited. Say ‘what’, not ‘how’ – establishing the means and plausibility of the ‘how’ is a later step. Consider whether outcomes are likely to be achieved in the program time frame.

Table 1 Examples of outcome statement structure from a range of sectors

Who/what | Change/desired effect | In what | By when
Agricultural production | Increase | $ value | Over x years
Biodiversity (species) | Increase | Trend | Over x years
Public awareness of an issue | Increase | Extent | Over x years
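Because the outcome statements in Table 1 all follow the same who/what, change, in-what, by-when pattern, they can also be held as small structured records, which makes it easier to pair each outcome with an indicator later. The snippet below is a sketch only – the field names and example values are ours, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class OutcomeStatement:
    """One outcome, structured along the columns of Table 1."""
    subject: str         # who or what is expected to change
    desired_change: str  # direction of change, e.g. "increase"
    measure: str         # in what terms the change is expressed
    timeframe: str       # by when

    def as_sentence(self) -> str:
        """Render the structured outcome as a plain-language statement."""
        return f"{self.desired_change.capitalize()} in {self.measure} of {self.subject} {self.timeframe}."

# Hypothetical examples mirroring the rows of Table 1
outcomes = [
    OutcomeStatement("agricultural production", "increase", "$ value", "over 5 years"),
    OutcomeStatement("public awareness of the issue", "increase", "extent", "over 3 years"),
]

for o in outcomes:
    print(o.as_sentence())
```

The point of the structure is simply that each part of the statement (subject, change, measure, timeframe) is made explicit, which is exactly what a later indicator needs to latch onto.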

This post provides a short introduction to the language and concepts of outcomes. Links to a wealth of information, tips and guides from around the world can be found from the LfS Managing for outcomes: using logic modeling webpage.


Diagramming a theory of change

The previous post looked at the benefits of using a Theory of Change (TOC) to better understand your program or project. This post will look at how to use post-it notes and an expanded logic model framework to involve stakeholders in beginning to develop this bigger picture.

Involving participants in articulating their projects through a simple logic model using post-its provides a good starting point

Rallying participants around the development of a visual logic model is often a good place to begin building a theory of change. The use of key headings and post-it notes makes it easy to provide a structure that helps people develop some early models that contribute directly to their program planning, and build confidence and capacity in the use of TOC outcomes-based approaches.

Logic models are narrative or graphical depictions of real-life processes that communicate the underlying assumptions upon which an activity is expected to lead to a specific result. There are four components commonly included in logic models (Fig. 2). These are the four primary components of the project or program itself – inputs, activities, outputs and outcomes. There are also four supporting activities which encourage participants to think more carefully about the underlying theory of change that they are planning to use. These supporting activities are: i) an outline of the current situation and desired vision; ii) stakeholder analysis, to identify which stakeholders should be involved in model development; iii) the scoping and planning exercise that underpins any model development, ensuring that underpinning assumptions are documented; and iv) noting internal and external factors – including related activities – that may influence outcomes.

Fig. 2. How the eight essential components of a TOC logic or outcomes model (colored boxes) fit together

There is no single or correct way to draw a logic model. It can be drawn horizontally (as in Fig. 2), vertically, or even in a more free-form fashion. Ideally, a logic model should be able to be displayed on a single page with sufficient detail that it can be explained fairly easily and understood by other people. Much of the value of a logic model is that it provides a visual expression of our underlying beliefs about why the program is likely to succeed through one step leading to another. Thus, each step between an activity and an output, or between an output and an outcome, can be thought of as an ‘if this happens … then that is likely to happen’ statement. For large or complex programs, the logic model may be divided into more detailed sections or sub-models. These may be summarized by a less detailed ‘overview’ model, often given on the first page, that shows how the component sub-models fit together into a whole.

As an example, Fig. 2 illustrates the main program logic elements set out in a horizontal fashion. The inputs are the resources used to undertake the activities, produce the program outputs, and ultimately contribute towards desired outcomes. Inputs typically include such things as money, staff, and equipment/infrastructure. Inputs are usually measured as counts, such as hours of staff time, dollars spent, etc. Activities are the actual interventions and actions undertaken by program stakeholders, staff and agencies to achieve specified outputs. Activities can range from writing a memo, to holding workshops, to creating infrastructure. Activities are usually measured in terms of the number of things done – e.g. x meetings held with communities. Outputs are the tangible results of the major activities in the program (the goods and services produced). They are usually measured by their number – e.g. reports produced, newsletters published, numbers of field days held. Collectively the inputs, activities and outputs define what the program does, and how efficient it is in managing those elements. Outcomes represent the effectiveness of the program – they are the desired states of the community, biological system or production sector achieved by the program. Outcomes may be specified in terms of short-term, intermediate and long-term, or just intermediate and long-term. A long-term outcome will usually have a number of intermediate outcomes that together contribute to its ultimate achievement.
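Because each step in a logic model is essentially an ‘if this … then that’ statement, the chain of components can also be sketched as a simple ordered structure. The example below is a hypothetical Python illustration (the field names are ours, not from any standard tool) showing how inputs, activities, outputs and outcomes link together, with the assumptions recorded alongside.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """A minimal forward chain: inputs -> activities -> outputs -> outcomes."""
    inputs: List[str]
    activities: List[str]
    outputs: List[str]
    intermediate_outcomes: List[str]
    long_term_outcomes: List[str]
    assumptions: List[str] = field(default_factory=list)       # beliefs behind each 'if ... then' link
    external_factors: List[str] = field(default_factory=list)  # related initiatives, risks, context

    def if_then_statements(self) -> List[str]:
        """Spell out the links between levels as plain 'if ... then ...' sentences."""
        levels = [
            ("activities", self.activities),
            ("outputs", self.outputs),
            ("intermediate outcomes", self.intermediate_outcomes),
            ("long-term outcomes", self.long_term_outcomes),
        ]
        statements = []
        for (name_a, level_a), (name_b, level_b) in zip(levels, levels[1:]):
            statements.append(
                f"If the {name_a} ({'; '.join(level_a)}) are delivered, "
                f"then the {name_b} ({'; '.join(level_b)}) are expected to follow."
            )
        return statements

# Hypothetical example
model = LogicModel(
    inputs=["staff time", "workshop budget"],
    activities=["hold community workshops"],
    outputs=["3 workshops held", "workshop report published"],
    intermediate_outcomes=["participants adopt improved practices"],
    long_term_outcomes=["improved catchment water quality"],
    assumptions=["participants have the capacity and support to change practices"],
)
print("\n".join(model.if_then_statements()))
```

Writing the links out in this way is just another means of making the underlying assumptions visible so they can be debated and documented.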

The diagram above also shows the supporting information and activities that help the model (and the intended program) to be understood in its wider context. Starting with a planning and scoping phase helps participants to clearly define the problem or need, and the desired outcome. An ‘issue’ statement should explain briefly the current situation: what needs to change; why there is a need for intervention; and what problem/issue the program aims to solve. This requires that ‘who, what, why, where, when, and how’ are all considered in relation to the problem/issue. Then, the overall purpose of the program needs to be defined. What are you trying to accomplish over the life of the program and beyond? The answer to this question is the solution to the issue statement, and will serve as the program’s vision. The program vision serves as a reference frame for all elements of the logic model that follow. Involving your key stakeholders (see the accompanying resources on stakeholder mapping and analysis) in the process of developing an outcomes model provides an opportunity to engage them in a discussion about the program and to get their input to the process.

The link between a program’s activities and outputs and its desired outcome is based on the assumptions that, explicitly or implicitly, are built into your program theory. Your program theory (or theory of change) sets out why you believe that the successful delivery of the program’s activities and outputs is expected to lead to the desired change (the predicted outcomes). It is important to document the program rationale – the beliefs about how change occurs in your field, based on research, experience, or best practice. This needs to be followed by identifying the corresponding assumptions that are built into the program rationale, and by acknowledging and documenting where uncertainties exist.

A final discussion can help participants to take account of the risks and opportunities facing the program. These can derive from both internal and external factors. Programs that are operating in complex environments cannot control all the factors that will influence how, when or even if they reach their goals.  Therefore it is also important to be aware of similar or related external initiatives that will impact on the final outcomes. This is important in terms of attribution – how to ascertain how much impact can be attributed to your program. It also provides the opportunity to look for other initiatives to link and integrate with, to develop useful synergy and maximize the overall influence of the program.  Internal factors might relate, for example, to staff and stakeholder capacities.

Three key reasons for using logic models in program design are that they: i) help you understand why and how something works; ii) provide a guide for implementing useful monitoring and evaluation systems; and iii) help you tell the story of your program quickly and visually. Logic models are most useful when they are developed at the beginning of a program. In this way they can be used to plan how resources are coordinated, and can even inspire particular project strategies. At this stage they can also help set realistic expectations for outcomes, bearing in mind that the ultimate desired end-state outcomes of an initiative can often take many years to emerge. Their initial development helps subsequent evaluation: once a program has been described in terms of a logic model, it is then possible to identify meaningful and easily measurable performance indicators. Finally, the simple, clear graphical representation that a logic model provides helps with program communication, and can serve as the basis for expanding the underlying TOC.

Finally – some tips for working with logic models

  • Start with ensuring a common understanding of the current situation and a shared vision:   It’s important to know where you are, and where you are trying to get to. These positions will have often been expressed in already published documents, mission statements, etc. The important thing is to ensure that there is some common understanding around the problem and the desired outcomes among all those that you are trying to work with on your journey.
  • Involve stakeholders:   A strong focus on the process of developing a logic or outcomes model (rather than seeing it as just a task to complete) can increase engagement in the program. Building a logic model provides an opportunity, often rare in the everyday provision of services, to involve stakeholders in a discussion on what it is about the planned initiative that is most meaningful to constituents.
  • Keep the model simple:  Concentrate on the most important activities and outcomes, and cut back on detail. Describe your activities and outcomes in language that is understood by a wide range of stakeholders.  This lets your logic model provide a common picture of your project that is easily understood. It’s important to get an  overview of the model on one page that can be used as a communication aid, and more detail can be added behind it if necessary.
  • Minimise the use of arrows:   In complex situations there are always many links and potential feedback loops between the boxes on the page. It is often enough to indicate the general movement of time and direction of the model.
  • Avoid siloed thinking:   Don’t just include steps and outcomes that are measurable or which you can absolutely prove you changed (attributable to you) – these may not end up being the most important part of the programme. Similarly don’t force lower steps to only contribute or influence a single higher-level step or outcome. Most elements influence a number of things in the real world.
  • Work constructively with disagreement:  Although it might be difficult, keep key stakeholders involved, including staff, program participants, collaborators, or funders. Take time to explore the reasons for disagreement about what should be captured in the logic model. Look for the assumptions, identify and resolve disagreements, and build consensus.

More information: Often people talk about logic models and theory of change processes interchangeably. Logic models typically connect programmatic activities to client or stakeholder outcomes. But a theory of change goes further, specifying how to create a range of conditions that help programmes deliver on the desired outcomes. These can include setting out the right kinds of partnerships, types of forums, particular kinds of technical assistance, and tools and processes that help people operate more collaboratively and be more results focused.

Using a theory of change (TOC) to better understand your program

This post provides a short introduction to the language and concepts of Theory of Change or program theory. It looks at how the use of these outcomes-based approaches helps those involved with  program learning, planning and evaluation. Subsequent outcomes-based posts look more specifically at developing logic models and working with outcomes.

Community-based change initiatives often have ambitious goals, so planning specific on-the-ground strategies to achieve those goals is difficult. Likewise, the task of planning and carrying out evaluation research that can inform practice and surface broader lessons for the field in general is a challenge. A Theory of Change approach provides a framework which encourages program staff and stakeholders to develop comprehensive descriptions and illustrations of how and why a desired change is expected to happen in a particular context. It is outcomes-based, and helps those involved to clearly define long-term goals and then map backwards to identify the necessary preconditions that will be required for success.

Theories of change are vital to program success for a number of reasons. Programs need to be grounded in good theory. By developing a theory of change based on good theory, managers can be better assured that their programs are delivering the right activities for the desired outcomes. And by creating a theory of change, programs are easier to sustain, bring to scale, and evaluate, since each step – from the ideas behind it, to the outcomes it hopes to provide, to the resources needed – is clearly defined within the theory. Often people talk about logic models and theory of change processes interchangeably. Logic models connect programmatic activities and outputs to client or stakeholder outcomes. But a theory of change goes further, specifying how to create a range of conditions that help programs deliver on the desired outcomes. These can include setting out the right kinds of partnerships, types of forums, particular kinds of technical assistance, and tools and processes that help people operate more collaboratively and be more results focused.

The importance of the concept was well illustrated in a 1995 paper – Nothing as Practical as Good Theory: Exploring Theory-Based Evaluation. In that paper, Carol Weiss hypothesized that a key reason complex programs are so difficult to evaluate is that the assumptions that inspire them are poorly articulated. She argued that stakeholders of complex community initiatives are typically unclear about how the change process will unfold, and therefore pay little attention to the early and mid-term changes that need to happen in order for a longer-term goal to be reached. The lack of clarity about the ‘mini-steps’ that must be taken to reach a long-term outcome not only makes the task of evaluating a complex initiative challenging, but reduces the likelihood that all of the important factors related to the long-term goal will be addressed.

Weiss popularized the term ‘Theory of Change’ as a way to describe the set of assumptions that explain both the mini-steps that lead to the long term goal of interest and the connections between program activities and outcomes that occur at each step of the way. She challenged designers of complex community-based initiatives to be specific about the theories of change guiding their work and suggested that doing so would improve their overall evaluation plans and would strengthen their ability to claim credit for outcomes that were predicted in their theory. Over subsequent years a number of evaluations have been developed around this approach, fueling more interest in the field about its value.

A theory of change is usually presented in a visual diagram (or logic model) that allows the reader to see the big picture quickly. It does not usually provide a specific implementation plan. The purpose of the process is to allow people to think about what must be changed before doing it.

Theory of change is both a process and a product (Vogel 2012).

At its simplest, theory of change is a dialogue-based process intended to generate a ‘description of a sequence of events that is expected to lead to a particular desired outcome.’ This description is usually captured in a diagram (or logic model) and narrative to provide a guiding framework of the change model showing how and why the desired goals can be reached by the project team and stakeholders. Acknowledging ToC as a process reminds us that a ToC inquiry is an ongoing process of analysis and reflection. It is not a one-off exercise to design (or evaluate) an initiative, but implies an ongoing learning and adaptive management cycle.

In brief, a theory of change starts by identifying a clear ultimate goal and working backwards to establish the preconditions for reaching that goal. At each step any assumptions are examined. The next step is to identify indicators. Only when these steps have been completed are the activities or interventions identified. Finally, a narrative is drafted to explain the theory of change in everyday language. As Vogel points out, developing a theory of change requires discussion between the different stakeholder groups of the following elements (in order; a minimal sketch of this backward-mapping sequence follows the list):

  • the context for the initiative, including social, political and environmental conditions, the current state of the problem the project is seeking to influence and other actors able to influence change;
  • the long-term outcomes that the initiative seeks to support and for whose ultimate benefit;
  • the broad sequence of events anticipated (or required) to lead to the desired long-term outcome;
  • the assumptions about how these changes might happen, and about contextual drivers that may affect whether the activities and outputs are appropriate for influencing the desired changes in this context;
  • a diagram (logic model) and narrative summary that represents the sequence and captures the discussion.
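One way to keep the ‘work backwards from the goal’ discipline visible during these discussions is simply to record the elements in the order they are agreed. The sketch below is a hypothetical illustration only – the names and structure are ours, not a standard from Vogel – and it does nothing more than enforce the sequence described above: goal, preconditions, assumptions, indicators, then activities, with a narrative drafted last.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TheoryOfChange:
    """Elements of a ToC, filled in by working backwards from the long-term goal."""
    context: str
    long_term_goal: str
    preconditions: List[str] = field(default_factory=list)          # outcomes that must be in place first
    assumptions: Dict[str, str] = field(default_factory=dict)       # precondition -> why we believe the link holds
    indicators: Dict[str, str] = field(default_factory=dict)        # precondition -> how progress will be judged
    activities: Dict[str, List[str]] = field(default_factory=dict)  # precondition -> interventions, chosen last
    narrative: str = ""                                              # everyday-language summary, drafted at the end

# Hypothetical example
toc = TheoryOfChange(
    context="Smallholder farmers face declining soil fertility",
    long_term_goal="Farm incomes are stable or rising within a decade",
)
# Work backwards: preconditions first ...
toc.preconditions.append("Farmers adopt soil conservation practices")
# ... then surface assumptions and indicators for each precondition ...
toc.assumptions["Farmers adopt soil conservation practices"] = "Practices are affordable and locally appropriate"
toc.indicators["Farmers adopt soil conservation practices"] = "Share of farms using conservation tillage"
# ... and only then choose the activities expected to bring the precondition about.
toc.activities["Farmers adopt soil conservation practices"] = ["Farmer field schools", "Demonstration plots"]
toc.narrative = ("If field schools and demonstration plots build skills and confidence, farmers will adopt "
                 "conservation practices, which over time stabilises yields and incomes.")
```

The value is not in the structure itself but in the conversation it forces: every activity has to be justified by a precondition, and every precondition has to carry its assumptions and indicators with it.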

The main benefit of theory of change comes from making different views and assumptions about the change process explicit, especially seemingly obvious ones. A good theory of change can specify how to create a range of conditions that help programs deliver on the desired outcomes. These can include setting out the right kinds of partnerships, types of forums, particular kinds of technical assistance, and tools and processes that help people operate more collaboratively and be more results focused. The purpose of doing so is to help program staff and stakeholders to check that programs are appropriate, debate them and enrich them to strengthen project design and implementation. For this reason, theory of change as a process emphasizes the importance of dialogue with stakeholders, acknowledging multiple viewpoints and recognition of power relations, as well as political, social and environmental realities in the context.

Subsequent outcomes-based posts look more specifically at developing logic models and working with outcomes. A range of links to online material can be found from the Theory of Change page and the related Managing for outcomes: using logic modeling page.

[Note: An initial version of this post was first posted on the Learning for Sustainability sparksforchange blog in February 2013.]

Ensuring effective teams

Well-functioning teams require a common vision, well-established rules for engagement, and good interpersonal trust (Photo: The patriot jet team – Flickr – Ian Abbott)

Teams are an important element of many organizational initiatives. This post looks at a three phase approach to setting up effective teams. It outlines a number of key aspects that underpin success in each phase, and provides some key tips for effective teamwork.

The use of learning and team approaches within organizations is well accepted as an integral component of many such initiatives. A variety of factors drive this acceptance. One is simply that many activities are too large for individuals to handle alone. Another is that teams are more effective at addressing complex issues, and learn more rapidly than individuals. Moreover, there is ample evidence that harnessing the potential power of a team can have a dramatic effect on an organization’s ability to simultaneously meet targets, innovate, and improve employee job satisfaction and engagement.

When a team is functioning efficiently (be it a work team, sports team, or community group) the group dynamics and sense of belonging and acceptance can bring out the best in people. Groups can work together to enhance understanding, creativity and problem solving. Most of us have enjoyed being part of an effective group or team. However, while teams may be a necessary part of successful organizational change, their presence certainly doesn’t guarantee success. As most of us can also testify, teams can equally provide inefficient and/or frustrating environments in which to operate.

In order for team initiatives to be successful, there must be a unified effort by company leadership, adequate direction and support for the team initiative itself, and ongoing measurement and adjustment of progress towards the desired change. A team-based approach to supporting a sustainability initiative can be usefully thought of as comprising three phases. The starting phase is about establishment, then there is a focus on team operation, and the final phase emphasizes evaluation and adjustment. The three phases are not necessarily linear. Each phase is a work in progress, with overlap into each of the other phases.

Getting started

Management can support the process by recognizing and agreeing on the need for change, and helping align the right people to the team. In wider contexts this phase draws heavily on stakeholder mapping and analysis techniques. The team will also need resources in terms of time and facilities for meetings, administration support, costs for research and information gathering, and access to organizational decision-making. There are a number of aspects to ensuring the right balance of skills in the team, including:

  • Identifying people and selecting those that are willing to participate, rather than calling for willing volunteers.
  • Looking to select both representatives of the key areas of operation in the organisation, and those with good networking skills to feed information from the group to the rest of the organisation and vice versa.
  • Limiting the size of the team (5-12) unless it is highly structured and has clearly identified individual functions.

Team operation

It is essential to convert the group of assembled participants into a team with a common vision, well-established norms of behaviour and a good level of interpersonal trust. So some form of self-led or externally facilitated team-building can be useful at the start. There are several practical aspects to getting the job done including:

  • Developing roles, particularly for facilitation, chairing, administration and resource provision; and determining a method of rotating these if necessary.
  • Undertaking basic research, including a literature review, to become informed.
  • Developing procedures for diagnosing, analysing, and resolving team work problems and conflicts.
  • In addition it is useful for teams to understand the kind of process in which they are involved and be able to look for ways to move through the stages of group development.

Evaluation and adjustment

Monitoring and evaluation are vital if organisations are to judge whether change efforts have succeeded or failed. Conventionally, this involves measuring performance against pre-set indicators – often with the help of outside experts. Often, too, this is done at the end of the project cycle. However, monitoring and evaluating in this way does not help improve ongoing projects, nor can participants learn from ‘surprises’.

Alternative approaches to monitoring and evaluation have emerged because of a growing recognition of the limitations of this approach. These are usually more participatory and focus also on the process of reaching the final results, rather than just assessing whether the group reached defined objectives. This approach encourages monitoring of intermediate indicators of progress, and therefore can serve to guide and motivate the team as it proceeds. It also facilitates an understanding of the link between team process and results. Evaluating the process in this way enables determination of issues such as:

  • How well the team are able to adapt the approach and goals to their particular context.
  • Whether others in the company participate and have a role in shaping the process and design of the project.
  • Whether there has been a positive move towards desired outcomes.

The participatory nature of these reflections encourages the use of monitoring and evaluation as a social learning tool and allows the perspectives of different team members to be articulated. It also provides information to feed into project design, enabling the team to rethink and adapt goals and methods during the project according to emerging issues.

Easy tips for effective teamwork

A few key elements are worth highlighting because they contribute so strongly towards the environment that the team building approach is designed to support.

  • Create clear goals – members need to understand the goals, believe they are important, expect to accomplish these and be able to identify when they have done so.
  • Encourage teams to go for small wins – building effective teams takes time, and teams should aim for small victories before the big ones. Short term goals build cohesiveness and confidence.
  • Build mutual trust and a sense of belonging – it is important that team members are kept informed and that a culture of openness is created where people feel supported to discuss ideas and problems.
  • Provide the necessary external support – if success is dependent on resources then the organization needs to make sure they are available.

This post supports the related Team building, CoPs and learning groups page.  Other Learning for Sustainability site pages providing links to related resources include Managing participation and engagement, Building networks and Reflective practice.

Participatory action research provides for multiple benefits

The multiple linked facets of participatory action research.

Over recent years we have begun to see the increased use of collaborative and multi-stakeholder processes in a range of sustainability, natural resource and environmental management areas and sectors. Participatory Action Research is emerging as a useful approach to improving the way we learn about, and manage, these processes. This post provides a brief introduction to action research and how it can be used, and points to a range of resources that provide more specific information about its use in practice.

Participatory Action Research (PAR) is one of a family of research methodologies (action research, action learning, etc.) which aim to pursue action and research outcomes at the same time. With this emphasis on actively supporting action it differs from more mainstream research methodologies which place more importance on looking in from outside an intervention as a means to understand social and organizational arrangements. In action research the focus is action to improve a situation, and the research is the conscious effort, as part of the process, to develop public knowledge that adds to theories of action that inform similar collaborative processes. In summary, PAR encourages a simultaneous focus on four basic themes:

  • collaboration through participation
  • development of knowledge
  • social change
  • empowerment of participants

The process that the researcher uses to guide those involved can be seen as iterative learning cycles consisting of phases of planning, acting, observing and reflecting. Fundamental, then, to action research is the concept of “learning by doing”. It recognizes that people learn through the active adaptation of their existing knowledge in response to their experiences with other people and their environment – social learning.

The underlying assumption of this approach is that effective social change depends on the commitment and understanding of those involved in the change process. In other words, if people work together on a common problem clarifying and negotiating ideas and concerns, they will be more likely to change their minds if their joint inquiry indicates such change is necessary. Also, it is suggested that this collaboration can provide people with the time and support necessary to make fundamental changes in their practice which endure beyond the research process.

Against this background, the role of the action researcher is similar to that of many contemporary practitioners aiming to work in a facilitatory manner to help people in communities and organizations to identify and adopt more sustainable natural resource management practices. These practitioners may come from key stakeholder groups, or they may be research or agency staff. However, their most effective role will be to work with a group, often with multiple interests and perspectives, to develop participatory attitudes, excitement and the desire to work together on jointly negotiated courses of action to bring about improvements and innovation for individual and community benefit. While this role is similar to much consultancy work, action research provides an approach that is more rigorous, and which allows for the development of public knowledge to advance the field.

An action research approach looks to build on good reflective practice. Through their observations and communications in the collaborative process at hand, reflective practitioners are continually making informal assessments and judgments about the best way to engage. The difference between this and carrying out these activities as part of an action research inquiry is that during the action research process practitioners need to develop and use a range of skills to achieve better evaluation and critical reflection. These skills include more detailed planning, more conscious observation, active listening, and improved attention to evaluation and critical reflection. A good understanding of social and behavior change theory is also important.

More resources can be found from the main LfS participatory action research page. More links to related material can also be found from the LfS Theory of Change page, and the Social Learning and the Planning, Monitoring and Evaluation sections.

Using rubrics to assess complex tasks and behaviors

Developing rubrics – involve people in brainstorming criteria to be assessed

Rubrics are an easily applied form of assessment. They are most commonly used in education, and offer a process for defining and describing the important components of work being assessed. They can help us assess complex tasks (e.g. essays or projects) or behaviors (e.g. collaboration, team work). Increasingly, rubrics are being used to help develop assessments in other areas such as community development and natural resource management.

In a recent paper we describe how rubrics are being used to help collaboration in an integrated research programme – Bridging disciplines, knowledge systems and cultures in pest management.

Although the format of rubrics can vary, they all have two key components:

  • A list of criteria – or what counts in an activity or task
  • Gradations of quality – to provide an evaluative range or scale.

Developing rubrics helps clarify the expectations that people have for different aspects of task or behavior performance by providing detailed descriptions of collectively agreed expectations. Well-designed rubrics increase the reliability and validity of an assessment, and ensure that the information gathered can be used to help people assess their management efforts and improve them. A rubric differs from a simple checklist in that it also describes the gradations of quality (levels) for each dimension of the performance being evaluated. It is important to involve program participants in developing rubrics, helping to define and agree on the criteria and how they will be assessed. This broad involvement increases the likelihood that different evaluation efforts will provide comparable ratings. As a result, the assessments based on these rubrics will be more effective and efficient.

Developing rubrics with participants involves a number of steps (a simple sketch of the resulting structure follows the list below).

  • Defining the task to be rated. This can include consideration of both outputs (things completed) and processes (level of participation, required behaviors, etc.).
  • Defining criteria to be assessed. These should represent the component elements that are required for successful achievement of the task to be rated. The different parts of the task need to be set out simply and completely. This can often be started by asking participants to brainstorm what they might expect to see where/when the task is done very well … and very poorly.
  • Developing scales which describe how well any given task or process has been performed. This usually involves selecting 3-5 levels. Scales can use different language such as:
    –   Advanced, intermediate, fair, poor
    –   Exemplary, proficient, marginal, unacceptable
    –   Well-developed, developing, under-developed
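To show how the two key components – criteria and gradations of quality – fit together in practice, here is a minimal sketch in Python. The criteria, levels and descriptions below are invented for illustration; a real rubric would use the criteria and level descriptions agreed with participants.

```python
# A rubric: a set of criteria, each with an agreed description for every quality level.
LEVELS = ["under-developed", "developing", "well-developed"]

rubric = {
    "shared understanding of goals": {
        "under-developed": "Team members describe the project's goals differently.",
        "developing": "Most members describe the same goals, with some gaps.",
        "well-developed": "All members describe the same goals in their own words.",
    },
    "participation in meetings": {
        "under-developed": "A few voices dominate; others rarely contribute.",
        "developing": "Most members contribute when invited.",
        "well-developed": "All members contribute without prompting.",
    },
}

def summarise(ratings: dict) -> None:
    """Print each criterion's rating alongside the agreed description of that level."""
    for criterion, level in ratings.items():
        assert level in LEVELS, f"unknown level: {level}"
        description = rubric[criterion][level]
        print(f"{criterion}: {level} - {description}")

# Hypothetical assessment of one team meeting
summarise({
    "shared understanding of goals": "developing",
    "participation in meetings": "well-developed",
})
```

Because the level descriptions are written down and agreed in advance, different assessors rating the same work are more likely to reach comparable judgments.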

More resources on rubrics can be found from the main Learning for Sustainability indicators page. Other related PM&E resources can be found from the Theory of Change and Logic Modelling pages.

Complicated or complex – knowing the difference is important

A flock of birds can be seen as a complex adaptive system (Photo: Red-billed quelea flocking at a waterhole, by Alastair Rae, Wikimedia Commons)

Understanding the difference between complex and complicated systems is becoming important for many aspects of management and policy. Each type of system is better managed with different leadership, tools and approaches. This post explains the differences, and provides an introduction to the management tools and leadership tasks best suited to complexity.

A major breakthrough in understanding how to manage complex multistakeholder situations and programs has come through the field of systems theory. Systems thinking is a way of helping people to see the overall structures, patterns and cycles in systems, rather than seeing only specific events or elements. It allows the identification of solutions that simultaneously address different problem areas and leverage improvement throughout the wider system. It is useful, however, to distinguish between different types of systems.

Simple, complicated or complex

According to a classic report in healthcare by Sholom Glouberman and Brenda Zimmerman – Complicated and Complex systems: What would successful reform of Medicare look like? – systems can be usefully seen as lying along a broad continuum from ‘simple’ to ‘complicated’ to ‘complex’.

Simple problems (such as following a recipe or protocol) may encompass some basic issues of technique and terminology, but once these are mastered, following the ‘recipe’ carries with it a very high assurance of success. Complicated problems (like sending a rocket to the moon) are different. Their complicated nature is often related not only to the scale of the problem, but also to increased requirements around coordination or specialized expertise. However, rockets are similar to each other, and because of this there can be a relatively high degree of certainty that one success can be repeated. In contrast, complex systems are characterized by relationships, and by their properties of self-organisation, interconnection and evolution. Research into complex systems demonstrates that they cannot be understood solely by simple or complicated approaches to evidence, policy, planning and management. The metaphor that Glouberman and Zimmerman use for complex systems is raising a child. Formulae have limited application. Raising one child provides experience but no assurance of success with the next. Expertise can contribute, but is neither necessary nor sufficient to assure success. Every child is unique and must be understood as an individual. A number of interventions can be expected to fail as a matter of course. Uncertainty about the outcome remains. The most useful solutions usually emerge from discussions within the wider family and involve values.

Management implications

These differences have important implications for management. Complicated systems are all fully predictable. These systems are often engineered. We can understand these systems by taking them apart and analyzing the details. From a management point of view we can create these systems by first designing the parts, and then putting them together. However, we cannot build a complex adaptive system (CAS) from scratch and expect it to turn out exactly in the way that we intended. CAS are made up of multiple interconnected elements, and are adaptive in that they have the capacity to change and learn from experience – their history is important. Examples of CAS include ourselves (human beings), a flock of birds (e.g. the picture above), the stock market, ecosystems, immune systems, and any human social-group-based endeavor in a cultural and social system. CAS defy attempts to be created in an engineering effort, and the components in the system co-evolve through their relationships with other components. But we can achieve some understanding by studying how the whole system operates, and we can influence the system by implementing a range of well-thought-out and constructive interventions.

Getting people to work collectively in a coordinated fashion in areas such as poverty alleviation or catchment management is therefore better seen by agencies as a complex, rather than a complicated, problem – a fact many managers are happy to acknowledge … but somehow this acknowledgement often does not translate into different management and leadership practice.

Indicators of progress in managing a complicated system are directly linked through cause and effect. However, indicators of progress in a complex system are better seen as providing a focus around which different stakeholders can come together to discuss progress, with a view to potentially changing their practices to improve the way the wider system is trending. Understanding this difference has important implications for management action, as the table below highlights. In many cases people continue to refer to the system they are trying to influence as if it were complicated rather than complex, perhaps because this is a familiar approach and there is a sense of security in having a blueprint and fixed milestones. Furthermore, it is easier to spend time refining a blueprint than it is to accept that there is much uncertainty about what action is required and what outcomes will be achieved. When dealing with a complex system, it is better to conduct a range of smaller innovations, and to find ways to constantly evaluate and learn from the results and adjust the next steps, rather than to work to a set plan.

The art of management and leadership lies in having an array of approaches, and being aware of when to use which approach. Most situations will have simple, complicated and complex system types present, and there may well be multiple systems involved. What is important is distinguishing between system types, and managing each in the appropriate way. Table 1 looks at different leadership roles that can be employed depending on whether one is dealing with a complicated or complex system.

Table 1: Different leadership roles for different systems

Complicated systems | Complex adaptive systems
Role defining – setting job and task descriptions | Relationship building – working with patterns of interaction
Decision making – find the ‘best’ choice | Sense making – collective interpretation
Tight structuring – use chain of command and prioritise or limit simple actions | Loose coupling – support communities of practice and add more degrees of freedom
Knowing – decide and tell others what to do | Learning – act/learn/plan at the same time
Staying the course – align and maintain focus | Notice emergent directions – building on what works

As Irene Ng points out in her Complicated vs Complex Outcomes post, we have spent the last 100 years doing complicated rather well. “We can pat our backs on putting the man on the moon, doing brain surgeries etc.” We are now moving to a world where learning and innovation are becoming key outcomes, and delivering these requires new skills and capacities. As Irene Ng so eloquently puts it, “We can determine complicated outcomes. We can only enable complex outcomes. We can specify complicated systems. We can only intervene in complex systems.”

Look for leverage points

In complex situations it is useful to move beyond thinking of “a change” that will fix the system, and instead look for a number of “leverage points” that may be adjusted to improve the system. Encouraging the development and implementation of new work practices, for example, may require changes in rules (e.g. laws, protocols and tacit norms), changes in relationships, networks and patterns of behavior (e.g. how conflict is handled, how mistakes are managed, how power is used), and the use of a range of tools (e.g. databases, checklists, guidelines). One-size-fits-all approaches are unlikely to work in complex adaptive systems. The way solutions are visioned and delivered locally must reflect the values, contexts and cultures of each different community of stakeholders.

The key for those working with these complex adaptive systems is to support ongoing reflection and collaboration among the different people and groups involved. Future visions and common goals need to be openly discussed and negotiated, and tentative pathways forward charted. Over time good practice in these areas will lead to creative collaborative and partnering arrangements that support ongoing innovation and sustainable development.

This post complements the Learning for Sustainability portal Managing complex adaptive systems page, which provides annotated links to a number of key on-line resources in this area. Theories of Change and associated outcomes models are useful tools which help managers to go beyond linear paths of cause and effect, to explore how change happens more broadly and then analyze what that means for the part that their particular agency or program can play.

[An earlier version of this post was originally posted on the Learning for Sustainability sparks for change blog – 3 March 2013]