Selecting evaluation questions and types

Evaluations provide an opportunity to consider the initiative’s overall progress, including focused consideration of specific aspects of the initiative. A small number of Key Evaluation Questions (KEQs) help provide this focus. These are not questions asked in an interview or questionnaire, but high-level research questions that will be answered by combining data from several sources and methods. Using a theory of change approach with an accompanying project logic model provides an outline that helps develop measures of success tracing the project’s development and impact over time. These, in turn, need to be focused by appropriate evaluation questions driven by funders, project participants and other key stakeholders. The five criteria for evaluating development interventions (relevance, effectiveness, efficiency, impact, and sustainability) outlined in the OECD/DAC evaluation guidelines provide a good starting framework. A useful starting set of key evaluation questions (KEQs) based on these criteria could, for example, include:

  • Is the initiative delivering on outputs and outcomes as planned? (efficiency and effectiveness)
  • Are (or were) the activities and their delivery methods effective? Are there aspects that could have been done differently? (process effectiveness)
  • Is the wider project story being told? What range of outcomes (intended and unintended) has the project contributed to – taking account of social, economic, environmental and cultural considerations? (relevance and impact)
  • How has the initiative influenced the appropriate stakeholder community, and what capacities has it built? (relevance and impact)
  • Has the initiative been delivered on budget? (efficiency)
  • Is the project impacting positively on key groups and issues identified as important in the project design – particularly gender, indigenous peoples, youth and the environment? (relevance and impact)
  • Is there evidence that the initiative is likely to grow – scaling up and out – beyond the project life? (sustainability)

Evaluation types (and/or methods) are distinguished by the nature of the questions they attempt to answer. It is important to begin an evaluation by being clear about what is wanted from it. A logic model provides a project outline from which different measures of success can be developed, tracing the project’s development and impact over time. Other frameworks can provide ideas about the scale and levels of programme intensity to be considered by stakeholders. As these perspectives illustrate, all but very simple projects involve a number of elements, so many evaluations will use a number of methods to look at different questions. The diagram below shows how different evaluation approaches and methods can be used to measure different parts of the overall project or change initiative.

A project logic model showing how different evaluation types and approaches can be used to measure progress through different stages of implementation

Different project stages require different evaluation questions to assess progress:

Needs Assessments: These evaluations verify and map the extent of a problem. They answer questions about the number and characteristics of the individuals or institutions who would constitute the targets of a program to address the problem. Needs assessments can help design a new program or justify continuation of an existing program.
Accountability: Monitoring activities produce regular, ongoing information that answers questions about whether a program or project is being implemented as planned, and identifies problems and facilitates their resolution in a timely way.
Formative evaluations: These evaluations answer questions about how to improve and refine a developing or ongoing program. Formative evaluation is usually undertaken during the initial, or design, phase of a project. However, it can also be helpful for assessing the ongoing activities of an established program. Formative evaluation may include process and impact studies. Typically, the findings from formative evaluations are provided as feedback to the programs evaluated.
Process evaluations: Studies of this kind are directed toward understanding and documenting program implementation. They answer questions about the types and quantities of services delivered, the beneficiaries of those services, the resources used to deliver the services, the practical problems encountered, and the ways such problems were resolved. Information from process evaluations is useful for understanding how program impact and outcomes were achieved and for program replication. Process evaluations are usually undertaken for projects that are innovative service delivery models, where the technology and the feasibility of implementation are not well known in advance.
Impact or Outcome Evaluations: These evaluations assess the effectiveness of a program in producing change. They focus on the difficult questions of what happened to program participants and how much of a difference the program made. Impact or outcome evaluations are undertaken when it is important to know how well a grantee’s or foundation’s objectives for a program were met, or when a program is an innovative model whose effectiveness has not yet been demonstrated.
Summative Evaluations: Summative evaluations answer questions about program quality and impact for the purposes of accountability and decision making. They are conducted at a project’s or program’s end and usually include a synthesis of process and impact or outcome evaluation components.


A practical guide for engaging stakeholders in developing evaluation questions
In this report, Hallie Preskill and Nathalie Jones describe a five-step process for engaging stakeholders in developing evaluation questions, and include four worksheets and a case example to further facilitate the planning and implementation of your stakeholder engagement process. The report recognises that one way to ensure the relevance and usefulness of an evaluation is to develop a set of evaluation questions that reflect the perspectives, experiences and insights of as many relevant individuals, groups, organizations, and communities as possible. As potential users of the evaluation findings, their input is essential to establishing the focus and direction of the evaluation. By soliciting the opinions, interests, concerns and priorities of stakeholders early in the evaluation process, the results are more likely to address stakeholders’ specific information needs and be useful for a range of purposes, among them improving program effectiveness, informing policy decisions and/or instigating behavioural change.


Developing a monitoring and evaluation plan
This page from the Australian-based Community Sustainability Engagement Evaluation Toolbox provides guidance for developing a monitoring and evaluation plan. The plan outlines the key evaluation questions and the detailed monitoring questions that help answer them. The site was developed and is maintained by Damien Sweeney and Martin Pritchard from Pacific Research & Evaluation Associates (PREA).


Specify the Key Evaluation Questions
This BetterEvaluation post reminds us that having an agreed set of Key Evaluation Questions (KEQs) makes it easier to decide what data to collect, how to analyze it, and how to report it. Try not to have too many Key Evaluation Questions – a maximum of 5-7 main questions will be sufficient. It might also be useful to have some more specific questions under the KEQs.


Key evaluation questions
This presentation illustrates what is meant by Key Evaluation Questions (KEQs). It reminds us to involve stakeholders in identifying appropriate and relevant questions. The authors caution against starting with evaluation methods – the aim and purpose of an evaluation should always determine the method, rather than the reverse.

