This post explores how evaluations benefit from being focused on a small set of key questions. These are often referred to as key evaluation questions (KEQs). They should be seen as high-level questions that assess progress towards the main specified outcomes, and are answered by combining data from several sources and methods.
Evaluations provide an opportunity for your (or your client's) intervention's overall progress to be considered, including focused consideration of specific aspects of the initiative. A well-developed theory of change (TOC) and accompanying logic models provide an outline that helps to develop measures of success that trace the intervention's development and impact over time. These measures, in turn, need to be focused with appropriate KEQs that are driven by funders, project participants and other key stakeholders.
The five criteria for evaluating interventions (relevance, effectiveness, efficiency, impact, and sustainability) outlined in the OECD/DAC evaluation guidelines provide a good starting framework for a range of initiatives in development areas (health, natural resource management, community resilience, etc.). Evaluation questions for a complex intervention should also address context, reasons for adaptation and emergence of activities and outcomes, and the different perspectives and inter-relationships that affect project success, sustainability and transferability.
A useful starting set of key evaluation questions to guide initial analysis is:
Is the research delivering on outputs and outcomes as planned? (efficiency and effectiveness)
Have applied activities and their delivery methods been effective? Are there aspects that could have been done differently? (process effectiveness)
Is the wider project story being told? What range of outcomes (intended and unintended) has the research project contributed to, taking account of social, economic, environmental and cultural considerations? (impact)
How has the project influenced the stakeholder community, and what capacities has it built? (impact)
Is the project being delivered on budget? What aspects of the participatory elements of the project could be done differently next time to cut costs while still delivering achievements? (efficiency)
Is the project impacting positively on key groups and issues that have been identified as important in project design? (impact)
Is there evidence that the initiative is likely to grow – scaling up and out – beyond the project life? (sustainability)
To what extent did the initiative deliver against the needs of key stakeholders? Were the size, scale and approach taken for each need appropriate? (impact & efficiency)
These questions need to be clarified by key project stakeholders. Some may be amended, others dropped, and new questions can be included. Developing these questions also provides an opportunity to revise the underlying theory of change and any accompanying logic or outcome models. In this way KEQs can be seen to help intervention planning and evaluation.
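As a hypothetical sketch (the question wording and structure below are illustrative, not part of any standard), a KEQ set like the one above could be tracked against the OECD/DAC criteria each question addresses, making coverage gaps easy to spot during question development:

```python
# Hypothetical sketch: tracking key evaluation questions (KEQs) against the
# OECD/DAC criteria they address, so coverage gaps are easy to spot.
keqs = [
    ("Is the research delivering on outputs and outcomes as planned?",
     {"efficiency", "effectiveness"}),
    ("Is there evidence the initiative will scale beyond the project life?",
     {"sustainability"}),
    ("What range of intended and unintended outcomes has the project contributed to?",
     {"impact"}),
]

criteria = {"relevance", "effectiveness", "efficiency", "impact", "sustainability"}

# Union of all criteria tagged on at least one question
covered = set().union(*(tags for _, tags in keqs))

# Criteria not yet addressed by any KEQ
print(sorted(criteria - covered))  # -> ['relevance']
```

A gap report like this can prompt stakeholders to add, amend or drop questions, as described above.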
This post looks more specifically at outcomes, and how they can be developed and written. It highlights the benefits of focusing on outcomes for project planning, implementation and evaluation. It also provides some tips and ideas for involving program staff and stakeholders in developing and working with outcome statements.
Until recently, the performance of many public sector programs has been judged largely on inputs, activities and outputs. Over recent years this approach has been increasingly questioned as being too concerned with efficiency considerations, without a corresponding focus on what benefits are actually arising from program funding and activities. Increasingly the trend is moving towards a focus on the specification and achievement of outcomes, revealing more about how effective programs are in achieving real development changes on-the-ground.
Outputs are the goods and services that result from activities. Outcomes are the constructive impacts on people or environments. In the past, planning and evaluation have tended to focus on program outputs – how we keep ourselves busy, the 'what we do' and 'who we do it with'. This enables us to tell our partners, funders and stakeholders what the program does, the services it provides, how it is unique, and who it serves. We can describe and count our activities and the different goods and services we produce. Now, however, we are being asked what difference it makes! This is a question about outcomes (see figure). Outcomes are the changes, benefits, learning or other effects that happen as a result of what the program offers or provides. Outcomes are usually specified in terms of either: i) social and organizational capacities (social outcomes – e.g. learning, understanding, perceptions, attitudes and behaviors), or ii) state conditions (the bio-physical, ecological, social or economic changes in a system).
While most people intuitively appreciate this distinction between outputs and outcomes, experience in results-oriented training sessions suggests that for many program staff, turning that appreciation into practice takes time. As the Keystone (2009) guide points out, it takes most people quite a lot of conscious practice before they start thinking in terms of outcomes, rather than outputs or needs or activities. An outcome statement describes a result – a change that has taken place. It is not a needs statement, or an activity that is still in progress. Outputs comprise the products and activities that you do, while outcomes are what we see as a result of those outputs. One simple test is to ask two questions of each statement: i) is it written as an outcome? and ii) does it describe changes that we can plausibly enable or facilitate in people, groups, institutions or environments?
Outcomes may be specified in different ways. Often a distinction is made between short-term, intermediate and long-term, or just intermediate and long-term. Short-term outcomes can be seen as the immediate difference that your program makes in the wider environment. A long-term outcome often has a number of short-term and intermediate outcomes that together contribute to the ultimate achievement of the long-term outcome. Collectively these outcomes should contribute explicitly to the wider vision underpinning program development. An intermediate outcome is a specified intermediate state that contributes to the desired long-term outcome – a step along the way. Intermediate outcomes are especially useful when time lags in measurable state outcomes are significant or limit timely response.
The program outcomes and intermediate outcomes should be structured in a logical hierarchy reflecting how each leads to another and/or contributes to the long-term community outcome(s). A useful way of doing this is to take each outcome and ask the question, ‘If we achieve this, what will it lead to and how will it contribute to the long-term outcome?’ Look for gaps – starting from the highest level outcome and working down the outcomes model. A test is being able to read an outcome and say, ‘Yes, this will likely be achieved if all of these initial (contributing intermediate) outcomes (and corresponding outputs) are achieved.’ The answers to these questions will enable you to draft a succinct statement of each outcome.
Each outcome statement should therefore define what will change as a result of an intervention and by how much (or, at the very least, in what direction the change will occur). This then allows the means of performance measurement to be defined. The more clearly an outcome statement specifies a desired change, the easier it is to define an appropriate indicator or indicator set.
It is not always easy to identify outcomes, and harder still to clarify them, but there are a number of key questions that can help. For example, begin by asking what is/will be different as a result of the initiative? For whom? What will be changed/improved? What do/will beneficiaries and other stakeholders say is the value of the program? For an existing program, look at the major activities. For each activity, ask yourself, ‘Why are we doing that?’ Usually, the answer to the ‘Why?’ question is an outcome. Most importantly, seek ideas and input from others. Their perspectives will help provide a broader understanding of the program and its benefits. This activity will also help build consensus among key program stakeholders.
When writing outcomes be sure to describe the desired change. Keep your outcomes SMART: Specific, Measurable, Achievable, Relevant, Time-limited. Say 'what', not 'how' – establishing the means and plausibility of the 'how' is a later step. Consider whether outcomes are likely to be achieved in the program time frame.
Table 1 Examples of outcome statement structure from a range of sectors (each example statement opens with a time frame, e.g. 'Over x years', followed by the desired change, such as public awareness of an issue, and its direction or amount).
This post provides a short introduction to the language and concepts of outcomes. Links to a wealth of information, tips and guides from around the world can be found from the LfS Managing for outcomes: using logic modeling webpage.
Often rallying participants around the development of a visual logic model is a good place to begin the development of a theory of change. The use of key headings and post-it notes makes it easy to provide a structure to help people develop some early models that contribute directly to their program planning, and build confidence and capacity in the use of TOC outcomes-based approaches.
Logic models are narrative or graphical depictions of real-life processes that communicate the underlying assumptions about how an activity is expected to lead to a specific result. There are four components commonly included in logic models (Fig. 2). These are the four primary components of the project or program itself – inputs, activities, outputs and outcomes. There are also four supporting activities which encourage participants to think more carefully about the underlying theory of change that they are planning to use. These supporting activities are: i) an outline of the current situation and desired vision; ii) stakeholder analysis, to identify which stakeholders should be involved in model development; iii) the scoping and planning exercise that underpins any model development, ensuring that underlying assumptions are documented; and iv) noting internal and external factors – including related activities – that may influence outcomes.
There is no single or correct way to draw a logic model. It can be drawn horizontally (as in Fig. 1), vertically, or even in a more free-form fashion. Ideally, a logic model should be able to be displayed on a single page with sufficient detail that it can be explained fairly easily and understood by other people. Much of the value of a logic model is that it provides a visual expression of our underlying beliefs about why the program is likely to succeed through one step leading to another. Thus, each step between an activity and an output, or between an output and an outcome, can be thought of as an 'if this happens … then that is likely to happen' statement. For large or complex programs, the logic model may be divided into more detailed sections or sub-models. These may be summarized by a less detailed 'overview' model, often given on the first page, that shows how the component sub-models fit together into a whole.
As an example, Fig. 2 illustrates the main program logic elements set out in a horizontal fashion. The inputs are the resources used to undertake the activities, produce the program outputs, and ultimately contribute towards desired outcomes. Inputs typically include such things as money, staff, and equipment/infrastructure. Inputs are usually measured as counts, such as hours of staff time, dollars spent, etc. Activities are the actual interventions and actions undertaken by program stakeholders, staff and agencies to achieve specified outputs. Activities can range from writing a memo, to holding workshops, to creating infrastructure. Activities are usually measured in terms of the number of things done – e.g. x meetings held with communities. Outputs are the tangible results of the major activities in the program (the goods and services produced). They are usually measured by their number – e.g. reports produced, newsletters published, numbers of field days held. Collectively the inputs, activities and outputs define what the program does, and how efficient it is in managing those elements. Outcomes represent the effectiveness of the program – the desired states of the community, biological system or production sector that the program seeks to achieve. Outcomes may be specified in terms of short-term, intermediate and long-term, or just intermediate and long-term. A long-term outcome will usually have a number of intermediate outcomes that together contribute to its ultimate achievement.
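The four primary components and their 'if this happens … then that is likely to happen' links can be captured as simple data. The sketch below is purely illustrative – the `LogicModel` class and all example entries are hypothetical, not a standard representation:

```python
# Illustrative sketch only: a minimal data representation of a program
# logic model. The class name and example entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    inputs: list = field(default_factory=list)      # resources (staff, funds)
    activities: list = field(default_factory=list)  # interventions undertaken
    outputs: list = field(default_factory=list)     # tangible goods/services
    outcomes: list = field(default_factory=list)    # desired changes
    links: list = field(default_factory=list)       # 'if X ... then Y' steps

model = LogicModel(
    inputs=["2.0 FTE staff", "$50k budget"],
    activities=["hold community workshops"],
    outputs=["6 workshops held", "workshop report published"],
    outcomes=["increased public awareness of the issue"],
)

# Each link records one 'if this happens ... then that is likely to happen'
# step between an activity and an output, or an output and an outcome.
model.links.append(("hold community workshops", "6 workshops held"))
model.links.append(("6 workshops held", "increased public awareness of the issue"))
```

Even a structure this simple makes the assumed causal chain explicit, which is the point of the exercise: each link is a belief that can later be tested against evidence.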
The diagram above also shows the supporting information and activities that help the model (and the intended program) to be understood in its wider context. Starting out with a planning and scoping phase helps participants to clearly define the problem or need, and the desired outcome. An 'issue' statement should explain briefly the current situation: what needs to change; why there is a need for intervention; and what problem/issue the program aims to solve. This requires that 'who, what, why, where, when, and how' are all considered in relation to the problem/issue. Then, the overall purpose of the program needs to be defined. What are you trying to accomplish over the life of the program and beyond? The answer to this question is the solution to the issue statement, and will serve as the program's vision. The program vision serves as a reference frame for all elements of the logic model that follow. Involving your key stakeholders (see the accompanying resources on stakeholder mapping and analysis) in the process of developing an outcomes model provides an opportunity to engage them in a discussion about the program and to get their input to the process.
The link between a program’s activities and outputs and its desired outcome is based on the assumptions that explicitly, or implicitly, are built into your program theory. Your program theory (or theory of change) sets out why you believe that the successful delivery of the program’s activities and outputs is expected to lead to the desired change (the predicted outcomes). It is important to document the program rationale – the beliefs about how change occurs in your field, based on research, experience, or best practice. This needs to be followed by identifying the corresponding assumptions that are built into the program rationale and to acknowledge and document where uncertainties exist.
A final discussion can help participants to take account of the risks and opportunities facing the program. These can derive from both internal and external factors. Programs that are operating in complex environments cannot control all the factors that will influence how, when or even if they reach their goals. Therefore it is also important to be aware of similar or related external initiatives that will impact on the final outcomes. This is important in terms of attribution – how to ascertain how much impact can be attributed to your program. It also provides the opportunity to look for other initiatives to link and integrate with, to develop useful synergy and maximize the overall influence of the program. Internal factors might relate, for example, to staff and stakeholder capacities.
Three key reasons for using logic models in program design are that they: i) help you understand why and how something works; ii) provide a guide for implementing useful monitoring and evaluation systems; and iii) help you tell the story of your program quickly and visually. Logic models are most useful when they are developed at the beginning of a program. In this way they can be used to plan how resources can be coordinated and even inspire particular project strategies. They can also at this stage help set realistic expectations for outcomes, bearing in mind that the ultimate desired end-state outcomes of an initiative can often take many years to emerge. Their initial development helps subsequent evaluation, as once a program has been described in terms of a logic model, it is then possible to identify meaningful and easily measurable performance indicators. Finally, the simple, clear graphical representation that a logic model provides helps with program communication, and can serve as the basis for expanding the underlying TOC.
Finally – some tips for working with logic models
Start with ensuring a common understanding of the current situation and a shared vision: It’s important to know where you are, and where you are trying to get to. These positions will have often been expressed in already published documents, mission statements, etc. The important thing is to ensure that there is some common understanding around the problem and the desired outcomes among all those that you are trying to work with on your journey.
Involve stakeholders: A strong focus on the process of developing a logic or outcomes model (rather than seeing it as just a task to complete) can increase engagement in the program. Building a logic model provides an opportunity, often rare in the everyday provision of services, to involve stakeholders in a discussion on what it is about the planned initiative that is most meaningful to constituents.
Keep the model simple: Concentrate on the most important activities and outcomes, and cut back on detail. Describe your activities and outcomes in language that is understood by a wide range of stakeholders. This lets your logic model provide a common picture of your project that is easily understood. It’s important to get an overview of the model on one page that can be used as a communication aid, and more detail can be added behind it if necessary.
Minimize the use of arrows: In complex situations there are always many links and potential feedback loops between the boxes on the page. It is often enough to indicate the general movement of time and direction of the model.
Avoid siloed thinking: Don't just include steps and outcomes that are measurable or which you can absolutely prove you changed (attributable to you) – these may not end up being the most important part of the program. Similarly, don't force lower steps to contribute to or influence only a single higher-level step or outcome. Most elements influence a number of things in the real world.
Work constructively with disagreement: Although it might be difficult, keep key stakeholders involved, including staff, program participants, collaborators, or funders. Take time to explore the reasons for disagreement about what should be captured in the logic model. Look for the assumptions, identify and resolve disagreements, and build consensus.
More information: Often people talk about logic models and theory of change processes interchangeably. Logic models typically connect programmatic activities to client or stakeholder outcomes. But a theory of change goes further, specifying how to create a range of conditions that help programs deliver on the desired outcomes. These can include setting out the right kinds of partnerships, types of forums, particular kinds of technical assistance, and tools and processes that help people operate more collaboratively and be more results focused.
Rubrics are an easily applied form of assessment. They are most commonly used in education, and offer a process for defining and describing the important components of the work being assessed. They can help us assess complex tasks (e.g. essays or projects) or behaviors (e.g. collaboration, teamwork). Increasingly, rubrics are being used to help develop assessments in other areas such as community development and natural resource management.
Although the format of a rubric can vary, they all have two key components:
A list of criteria – or what counts in an activity or task
Gradations of quality – to provide an evaluative range or scale.
Developing rubrics helps clarify the expectations that people have for different aspects of task or behavior performance by providing detailed descriptions of collectively agreed expectations. Well-designed rubrics increase the reliability and validity of an assessment and ensure that the information gathered can be used to help people assess and improve their management efforts. A rubric differs from a simple checklist in that it also describes the gradations of quality (levels) for each dimension of the performance being evaluated. It is important to involve program participants in developing rubrics, helping to define and agree on the criteria and assessment. This broad involvement increases the likelihood that different evaluation efforts will provide comparable ratings. As a result, the assessments based on these rubrics will be more effective and efficient.
Developing rubrics with participants involves a number of steps:
Defining the task to be rated. This can include consideration of both outputs (things completed) and processes (level of participation, required behaviors, etc.).
Defining criteria to be assessed. These should represent the component elements that are required for successful achievement of the task to be rated. The different parts of the task need to be set out simply and completely. This can often be started by asking participants to brainstorm what they might expect to see where/when the task is done very well … and very poorly.
Developing scales which describe how well any given task or process has been performed. This usually involves selecting 3-5 levels. Scales can use different language such as:
– Advanced, intermediate, fair, poor
– Exemplary, proficient, marginal, unacceptable
– Well-developed, developing, under-developed
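The two key components – a list of criteria and gradations of quality – map naturally onto a simple data structure. The sketch below is a hypothetical illustration (the criteria, level names and scoring helper are invented examples, not a prescribed method):

```python
# Illustrative sketch only: a rubric as a mapping from criteria to ordered
# quality levels, plus a simple scoring helper. The criteria and level
# names below are hypothetical examples.
LEVELS = ["under-developed", "developing", "well-developed"]

rubric = {
    "stakeholder participation": LEVELS,
    "quality of reporting": LEVELS,
}

def score(ratings):
    """Convert per-criterion level ratings into a numeric total (0 = lowest level)."""
    return sum(rubric[criterion].index(level) for criterion, level in ratings.items())

ratings = {
    "stakeholder participation": "well-developed",  # index 2
    "quality of reporting": "developing",           # index 1
}
print(score(ratings))  # -> 3
```

Making the levels an ordered list (rather than free text) is what lets different raters' judgments be compared and aggregated consistently – the comparability benefit noted above.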