Once a programme has identified the specific questions it seeks to answer, the next step is to select the most suitable type of evaluation. Three evaluation designs could be considered: Non-Experimental Evaluations, Experimental Evaluations (Randomised Controlled Trials), and Quasi-Experimental Evaluations. Each approach has benefits and disadvantages, which are detailed in the following table.
Evaluation Design
Considerations
Non-Experimental Evaluations
These evaluation approaches involve comparing the results from…
Clarifying the questions a programme seeks to answer through an evaluation is an important step to selecting an evaluation strategy. Programmes should develop their evaluation questions during the design phase and use their Theory of Change as the basis of these questions. Wherever possible, they should take a participatory approach and collaborate with women and girls, local communities, practitioners, and other relevant stakeholders to develop these questions. This will help ensure the questions reflect the needs and perspectives of diverse stakeholders, and that the programme is accountable…
While developing an evaluation strategy, programmes should consider the type of VAWG programming they are focused on, the maturity of the programme (is it a pilot, or has the programme strategy already shown impact?), the questions the programme is trying to answer, the number of programme participants, their ability to mitigate ethical challenges, the budget and resources available for conducting an evaluation, and the skills needed to conduct it. To support this decision-making, programmes can use the following decision matrix, created for UN Women Asia-Pacific, as a guide:…
Where indicators and measurements of change are being designed for specific populations, it’s important to engage them in the design process, and validate the indicators based on their inputs. Working with women-led organisations or diverse civil society organisations helps promote safe engagement of marginalised groups and women and girls in data collection, and aligns with feminist-informed approaches. Before collecting data, the local availability of care and support services for survivors/victims should also be mapped; if services are not available in the community or cannot be made…
While indicators make it possible to measure a range of changes and programmatic impacts, only those that are needed should be included in monitoring, evaluation and learning (MEL) frameworks. Each additional indicator requires additional data collection, which can place a burden on the populations involved. There should therefore be enough indicators to measure change meaningfully and robustly, but not an excessive number. This is in line with ethical guidelines for data collection and supports a survivor-centred approach across all areas of work.
Different data collection methods and…
A good indicator should be clear and concise. It should focus on a single issue that provides relevant information on a situation; particularly information that provides the strategic insight required for effective planning and sound decision-making. Each indicator should include a description of what it measures, the tools needed to gather the data, and the calculations involved in producing the measurement. Ensuring indicators are SMART is a helpful way to guide this process:
Specific: Indicators should be specific and clearly defined, with a clear meaning and scope.
Measurable: Indicat…
Methods of data collection and measuring change should be both quantitative and qualitative to provide a more comprehensive understanding of progress towards reducing VAWG, addressing risk factors for VAWG, and providing high quality services to improve survivor wellbeing.
In terms of primary data collection, quantitative methods of information-gathering and measurement typically include surveys, questionnaires and statistics. Qualitative methods include interviews, focus group discussions and safety audits or observations. Qualitative methods can provide contextual information on risks…
You will need to design a range of indicators to measure change at different levels of your programme, in line with your Theory of Change: for example, indicators to track whether activities and outputs are delivered in the short term as designed, and indicators to measure whether the expected outcomes and longer-term impacts are achieved. The table below gives examples:
Type of Indicator
What the Indicator Measures
Level of Change Example
Indicator Examples
Input indicators
Measure the resources and…
A participatory approach to monitoring, evaluation and reporting (PMER) will usually make use of several techniques and tools, selected and combined to suit the objectives of the work and the resources available. Standard data collection tools (e.g. key informant interviews, focus group discussions, participant observation, case studies) can be used in a participatory approach if they are facilitated in a manner that supports deeper participation of stakeholders, rather than simply extracting information.
Some specific tools have been designed to support PMER, often based on visual aids and using locally…
PMER aims to track programmatic achievements and challenges effectively, while acknowledging and addressing the deep-rooted power imbalances often reproduced by development programming. It is therefore important to prioritise relationship building, moving at a pace that fosters trust, enables genuine collaboration, and is flexible enough to support meaningful engagement around PMER, ultimately contributing to movement building.
The following conditions can support effective implementation of participatory approaches:
Time: Participatory approaches take time and resources, and the…