
How to formulate strong outputs

Outputs are arguably not the most important level of the results chain. It is outcomes that should be the focus of a good plan. Ultimately, that's what counts.

However, outputs still matter.

Simply put, outputs refer to changes in skills or abilities, or to the availability of new products and services. In plain terms: outputs are what we plan to do to achieve a result.

Ok, let's be a bit more precise: an output usually means that a group of people or an organization has improved capacities, abilities, skills, knowledge, systems or policies, or that something is built, created or repaired, as a direct result of the support provided. That's a definition we can work with.

Language is important

When describing what you do, focus on the change, not the process. Language matters.

Don’t say: ‘Local organisations will support young women and men in becoming community leaders.’ This emphasises the process rather than the change.

Instead, emphasise what will be different as a result of your support. Say: ‘Young women and men have the skills and motivation to be community leaders.’

Make it time-bound

An organization’s support is typically not open-ended. You usually expect to wrap up what you do at a certain time. Emphasise that your activities are carried out within a certain time frame: it is always helpful to include a date in the formulation, for example ‘By January 2019, …’.

A formula for describing what you do

To ensure that you accurately describe what you do, use the following formula:

Thomas Winderl, 08.09.2020


Measuring impact of social innovation

Impact. You know the feeling, don’t you – you’ve been working on a brilliant initiative, and then someone turns up and asks you: “So – what impact are you making?”

It’s a fair question – indeed, it’s a question we should be asking ourselves. If we are not making a difference, we are wasting our time, aren’t we? In this blog post we look at what we mean by impact, and how we can measure it.

Impact – what is it?

First, let’s be clear what we mean by ‘impact’ – it’s the powerful and long-lasting effect that something we’re doing has on a situation or on people. So, for example, if we run a programme encouraging women to become entrepreneurs, hopefully some of them will set up successful businesses – that’ll be our impact.

The question now is: how do we know if we are achieving what we want? Well, we are going to have to measure the impact… And this is not always as difficult as it first sounds. Most things can be measured. Check out my brief blog post on We can measure (nearly) anything.


The 5 steps to measure impact of social innovation

To measure impact, we are going to need data – in other words, facts. So, to go back to our example, we could look at what percentage of trainees set up a business, how their businesses grow, and how much income they generate.

Importantly, we need TWO measures: the data before our intervention and the data following our intervention. Hopefully, when we compare the two, we will see that there has been an improvement. If not, we have wasted our time!

So how can we get our data? This is where the five steps to measure impact of social innovation come in.

Decision tree describing the five steps to measure the impact of social innovation.

Step 1: Dig deep into what we already know

You would be surprised: we often have more data available than we realise. In step 1, we carefully think through what data we are already collecting. For example, we are likely to have administrative or financial data. If we run training, we will also have data on trainees.


We are training young people on digital innovation

Think about it! We already know how many people attend our training and for how long; we just have to look at the attendance sheet the participants sign every day during our training. This is valuable data.

We also know who attends our training, including their sex and age, just by looking at the trainee profiles filled out by applicants.

We can find out the extent to which participants have acquired new skills by comparing their skills before and after the training.

And we may even have an idea of how many of our participants have managed to set up businesses or find employment –  just check the call log and notes from meetings with previous participants who have come back to us to ask for further support.

Step 2: Do some research about others

If we don’t have data, it’s possible that somebody else has useful data. So, before thinking about getting new data ourselves, let’s see if it is already being measured in some way by someone else. That is our step 2.

This will require some research – at least a careful search on the internet and government and non-governmental websites.

This is our chance to be a ‘clever detective’: Consider using big data, national statistical databases and reports, international data repositories, national or international surveys and indices.


A youth organisation is sending a caravan across Morocco to promote the Sustainable Development Goals. We want to measure if people become more aware of the SDGs.  OK, so here’s an easy way: use Google Trends to track how many people search for the term “SDG” over time.


A Ministry runs an awareness campaign to stop sexual harassment. After some research, we find out that HarassMap, a volunteer-based initiative in Egypt, already records reported incidences of sexual harassment. This data can be analysed and used to track high-level impact of the awareness campaign over time.

Step 3: Measure impact yourself

If we do not have data ourselves – and nobody else has it either – it’s time to put our thinking hats on: we need to measure it ourselves.

Just about every imaginable phenomenon leaves some evidence that it occurred. Let us look for any trails it leaves, consider tagging it, or carry out experiments:

a. Can we observe it directly?

For example: we have done some training for unemployed people in Somalia, and we want to measure to what extent trainees are successful in producing mobile apps. To do that, we regularly count the number of published apps with at least four stars on Google Play with the keyword “Somalia”.

b. If we can’t observe it directly, can we tag it to start tracking?

For example: 500 young people in Iraq are trained in entrepreneurship and design thinking. Six months after finishing the training, we offer 50 randomly selected trainees an additional day of tutoring with a group of established businesswomen and men. During this tutoring, we ask them to fill out a one-page questionnaire that helps us measure their success and ability to obtain additional loans.

c. If all else fails, can we set up an experiment to create the conditions to observe it?

For example: A network of youth organisations supports young people in political participation. To measure success, we compare how many young people under 21 are elected to councils in three supported cities with three similar councils in the same region that were not supported.

To collect data ourselves, we have a full toolbox from Social Sciences available to us. I wrote about this toolbox in another blog post.

Step 4: Use sampling to measure impact

This is my favourite part: Step 4 is about sample surveys to collect data.

Sampling is like magic: we observe just some of the things we are interested in, and from this we can learn something about all of them.

Sampling can be done for people (through interviews), things (through observations) and documents (through desk reviews).

And sample surveys can be small, simple and cheap, including only a single observation or one or two questions.


An organisation in Somalia provides 2,000 young people with new skills in digital innovation. We want to know the impact.

Rather than interview all of them, we randomly select 100 young people at the training graduation and ask them to leave an email address. Six months later, we ask them if they have found employment, in what area and how much they earn now.

Then, we ‘extrapolate’. That is to say: if we find that, for example, 60 of the 100 people we track have found work in the ICT sector and are earning an average of, say, $400 a month, then we can assume the same pattern will be found in all 2,000 trainees – i.e. that 60% of the 2,000 trainees are working, and that our training has created a total additional monthly income of $480,000. Multiply that over twelve months, and that’s well over $5 million in a year! That’s a BIG impact!
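The extrapolation works out as in the short Python sketch below (the figures are the hypothetical ones from the example):

```python
# Extrapolating from a random sample of 100 trainees to all 2,000
# (all figures are the hypothetical ones from the example above)
total_trainees = 2000
sample_size = 100
employed_in_sample = 60        # sampled trainees who found ICT work
avg_monthly_income = 400       # average earnings in US dollars

employment_rate = employed_in_sample / sample_size            # 60%
estimated_employed = round(total_trainees * employment_rate)  # 1,200 trainees
monthly_income = estimated_employed * avg_monthly_income      # $480,000 per month
yearly_income = monthly_income * 12                           # $5,760,000 per year

print(f"{employment_rate:.0%} employed, ${yearly_income:,} extra income per year")
```

The key assumption, of course, is that the 100 trainees were selected at random, so that their employment pattern is representative of all 2,000.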

Step 5: Estimations for measuring impact

Ok. If nothing has worked so far, we have one last option up our sleeves: estimations. No, I didn’t say ‘make things up’ (that wouldn’t be right) –  but we can get indications of impact by estimating data based on what we know already. Not convinced? Let’s look at an example:


We want to know how many young people our Sustainable Development Goals campaign has reached in a year.

Counting every single participant at each of our 200 events per year would be a nightmare. However, we can take a photo at 15 randomly selected events, roughly count the number of people in each picture, and take the average. Let’s say 50 people show up on average. Multiply that by 200 events, and we can estimate that roughly 10,000 people attended in a year.
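The same arithmetic in a short Python sketch (the head counts below are invented for illustration, chosen so that the average comes out at 50, as in the example):

```python
# Estimating annual reach from head counts in 15 randomly selected event photos
# (the head counts are made-up illustration data, averaging 50 as in the example)
events_per_year = 200
head_counts = [45, 50, 55, 48, 52, 50, 47, 53, 49, 51, 50, 46, 54, 50, 50]

avg_attendance = sum(head_counts) / len(head_counts)   # 50.0 people on average
estimated_reach = round(avg_attendance * events_per_year)

print(f"Estimated reach: about {estimated_reach:,} people per year")
```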

Nothing works? Rethink what you do!

Ok. If nothing has worked so far, we may have a problem.

If we cannot measure it at all, we may need to think again about what we are trying to achieve!


Indicators: choose the right angle

Once we have agreed on a clever, limited set of indicators, all seems good. However, we are not there yet. For most indicators, we can choose a specific ‘angle’.

What do I mean?

For example, we are considering the “Number of female members of Parliament” as an indicator. That may be fine. However, we may prefer the “Proportion of parliamentarians who are women” instead (and often that is preferable). Or we may want to compare it with the percentage of voters who are women. Or we want to capture the change over time – or even the funds invested in political leadership by women.

Soooo many options. But we need to get the angle right.
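To make the point concrete, here is a minimal Python sketch of one underlying fact viewed from different indicator angles (all figures are hypothetical):

```python
# One underlying fact, women in parliament, seen from different indicator 'angles'
# (all figures are hypothetical)
women_mps, total_mps = 90, 300
women_mps_last_term = 60

absolute_number = women_mps                          # "Number of female MPs": 90
proportion = women_mps / total_mps                   # "Proportion of MPs who are women": 30%
change_over_time = women_mps - women_mps_last_term   # change since last term: +30

print(f"{absolute_number} women MPs, {proportion:.0%} of parliament, "
      f"+{change_over_time} since last term")
```

Each angle answers a different question: the absolute number tracks scale, the proportion tracks representation, and the change over time tracks progress.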


Use change language in reports

An engaging report uses ‘change language’ instead of action language.


Ok, it’s not as complicated as these big words suggest. Language matters. Let’s look at it more closely:


Action language reflects the completion of a series of activities. An example of action language is:

“XYZ supported the peer education of 150,000 girls on HIV prevention”

Avoid using action language.


Instead of action language, use change language. Change language reports on the results of an action instead of the action itself.

An example of change language is:

“150,000 girls know how to protect themselves against HIV infection with the support of XYZ”


Using change language rather than action language means that we avoid action-focused phrases like:

“Organization XYZ supported …”

“Organization XYZ worked with …”

“Organization XYZ enhances…”

“Organization XYZ promoted…”

“Organization XYZ focused on…”

“Organization XYZ sought to…”

“Organization XYZ attempted to…”

Instead, describe the change that these activities lead to.


Avoid reporting on internal matters that are not directly related to delivering results: for example, capacity-building of staff, or issues to do with project management, implementation and staffing.

Two examples of what not to include:

In 2013, 15 XYZ and project staff were trained in human rights

Management of the project in the second half of 2013 has improved significantly, as has communication between counterparts, project staff and organization XYZ.


Five tips for good report writing

Do you hate report writing? Fed up with writing long, detailed reports that nobody seems to read?

But the truth is: report writing IS important. And it can be exciting :-). It can showcase what we have achieved. It is important to show those who have provided the money – taxpayers, communities, donors, charities, etc. – that we used it effectively. Furthermore, report writing is crucial for mobilising more money. And it helps those making decisions to improve what we do.

Our tips for good reporting

Good reporting isn’t that hard. My five tips for writing stellar reports are: 1. start with what is important, 2. use simple language, 3. use change language, 4. back it up with evidence, and 5. visualise data.

1. Start with what is important

Given the usual short attention span of most people, start with the most important things. I repeat, because that’s really important: Start with what really counts in a report. And in Results Based Management, what really counts are results. So: start with outcomes, outputs and – if feasible – impact.

In the report, develop a story line that describes changes at the outcome level. Follow up with a description of the outputs delivered to make these changes happen. If useful, include a limited number of key activities carried out to deliver the outputs. Finally, include the funds used to carry out these activities (= inputs). Here is a simplified example of a story line from outcomes to outputs and inputs.

Example of a story line for good report writing

2. Use simple language in reports

When we write reports, we want to be understood by the reader. That is why we should make it as easy as possible for the reader to understand what we are trying to communicate (without oversimplifying).

3. Use change language in reports

This is very, very important in report writing. It looks like a small thing, but the way we formulate results achieved has a tremendous effect on the quality of the report. So instead of saying “Our organisation supported the peer education of 20,000 unemployed women and men over 50”, say “20,000 unemployed women and men now know how to start a small business”. Check out our blog on How to use change language.

4. Back reports up with evidence

In our reports, any claim of success or progress, attribution or contribution must be backed up by evidence. It’s obvious, isn’t it? What is evidence? Evidence is anything presented to objectively support a claim. It is not the untested views of individuals or groups, sometimes inspired by ideological standpoints, prejudices, or speculative conjecture.

5. Visualise data

If you are like me (and many are), a report using only words and numbers is not easy to read at all.

The inclusion of simple visuals is an effective way to write reports and to communicate. For example, visuals can include charts, maps, graphs, time series, interactive visualisations, infographics, matrices, hierarchies, pictures, micro-content for social media, videos, comics, etc.

Data visualisation is a broad field. Here are a few pointers to some outstanding resources on data visualisation:

Want to know more?

Enrol in our free crash course on Practical Results Based Management at the Results Lab.

7 principles of a Theory of Change

A good theory of change (ToC) is based on seven principles: 1. Repeat doing it until you get it right, 2. links are as important as expected changes, 3. risk and assumptions are key, 4. it must be scalable, 5. it is not a results chain, 6. organisations often already have ToCs in place which can’t be ignored, and 7. it has to be simple.


A Theory of Change

Probably as an inevitable result of the buzz of the past few years, many Theories of Change are useless.

One reason for that, I feel, is that they are typically not developed in a logical sequence: without a thorough problem analysis, with a possible intervention already in mind, or simply as the mirror image of a results chain.

Here is my suggestion for the sequence in which we should go about it:


Stop overthinking: Minimum Viable Product


Ok. What are we talking about?

A minimum viable product – MVP in short – is a product with just enough features to gather validated learning about a possible intervention or product. It is deliberately imperfect; any additional work on it beyond what was required to start learning is waste, no matter how important it might have seemed at the time.

The idea of using minimum viable products is taken from the product development ideas of the lean start-up movement in the private sector. The best-known writer on the lean startup movement is probably Eric Ries, with his book ‘The lean startup: how constant innovation creates radically successful businesses’ (1).

The underlying reason to develop an MVP is to empirically test assumptions and hypotheses about what works and what does not. Using MVPs is a structured way to check that you have an efficient and appropriate solution or approach before rolling it out or making a big investment in it.

Minimum viable products can range in complexity from extremely simple ‘smoke tests’ (little more than an announcement or advertisement for a service or product) to actual early prototypes, complete with problems and missing features.

Source: Stop overthinking…Just stop!, Matias Honorato


Minimum viable products are the fastest way to get through the build-measure-learn feedback loop of adaptive programming (see section on adaptive programming). The goal of MVPs is to begin the process of learning, not end it. Unlike ‘prototypes’ or ‘proofs-of-concept’, MVPs are designed not just to answer product design or technical questions, but to test out theories of change and fundamental development hypotheses.

Some forms of minimum viable products to consider are:

  • Video MVP: simple, short video that demonstrates how a programme, project, policy, product or service is meant to work
  • ‘Concierge’ MVP: testing how a programme, project, policy, product or service is meant to work with a single client or very few clients
  • ‘Wizard of Oz’ MVP: clients believe they are interacting with an actual service or product, which is in fact only simulated by humans


In Papua New Guinea, UNDP tested whether a low-cost tool can help the government address corruption and mismanagement, which gobbled up 40% of the country’s annual budget. In 2014, UNDP partnered with local telecom companies to design a simple SMS-based minimum viable product. The MVP was subsequently tested with 1,200 staff in the Department of Finance, and updated based on user feedback. Within four months, it led to over 250 cases of alleged corruption under investigation. Based on the uptake of the first version, the service was rolled out to six new departments and 25,000 government officials countrywide in 2015. (2)

For examples from a business start-up perspective see e.g. The Ultimate guide to minimum viable products by Scale up my Business.


This blog post is based on: The lean startup: how constant innovation creates radically successful businesses, Eric Ries, 2011, pp.92-113; Minimum viable product (MVP), Wikipedia; Prototype testing plan, Development Impact & You (DIY)


(1) The lean startup: how constant innovation creates radically successful businesses, Eric Ries (2011)

(2) 6 ways to innovate for 2030, Benjamin Kumpf, 19-04-2016; Papua New Guinea: Phones against corruption, UNDP


Innovative programme design

It has always been clear to me: Monitoring and Evaluation depends to a large extent on how development or government programmes are planned and designed. That is why good M&E is strongly linked with good planning and design. In a way, programme design is a natural extension of our skill set in M&E.

That is why this blog post looks closer at the rapidly evolving landscape of options for innovative programme design.


In my view, we have recently seen such a push for innovation in programme design for three reasons:

  • Disillusion with linear models: Linear planning tools such as logical frameworks and linear theories of change are ineffective in a more complex, complicated or chaotic context. If you are not yet convinced, look at Ben Ramalingam’s comprehensive critique of current planning models (1).
  • Complexity and wicked problems: Genuine progress toward sustainable development is increasingly complex: solutions are not simple or obvious, those who would benefit are the ones who most lack power, those who can make a difference are disengaged, and political barriers are too often overlooked. Look, for example, at the ‘Doing Development Differently’ manifesto from 2014. That is why programme design must increasingly cope with ‘wicked’ problems: problems that are difficult to define clearly, are deeply interconnected, and driven by many factors and unforeseen events.
  • Data revolution: The world has been – and still is – undergoing a data revolution with far-reaching consequences for programme design: We now live in a world where 90 percent of the data out there today has been created in the last two years alone. Every minute, more than 270,000 tweets get published worldwide, Google receives no less than 4 million search queries, and over 200 million emails are sent.(2)


Despite this push, development organizations and governments have mostly kept more innovative programme designs contained in ‘innovation labs’ and small pilot projects. There are significant challenges to innovative programme design. Three reasons stand out in my view:

  • Mind set: Many of these innovations require ‘unlearning’ of the traditional, linear design approach based on a chain of results.
  • Rules and regulations: Many of these innovations may be difficult to carry out within the current rules and regulations for programme design and implementation.
  • Donor requirements: Testing these innovations may require flexibility by the donor.


In my view, there are eleven concrete options for innovative programme design in complex, complicated or chaotic settings for governments and development organisations to consider:


(1) Aid on the edge of chaos: rethinking international cooperation in a complex world, Ramalingam 2013

(2) Can big data help us make emergency services better? David Svab/Brett Romero, 20/04/2016


The Age of Data

Data, information and knowledge

Professional Monitoring and Evaluation is based on hard facts: data, information, knowledge and understanding. Let us take a closer look at these concepts:

Hierarchy in which data can lead to information (know what), knowledge (know how) and understanding (know why)


As you will know, data is a collection of objective facts, such as numbers, words, images, measurements, observations or even just descriptions of things. In other words: data consists of chunks of raw facts about the state of the world.

For example: crime rates, unemployment statistics, but also handwritten notes of interviews, or a recorded description of an observation.

Data is raw, unorganized and lacks context. To be useful, it needs to be turned into information.


Information is data that has meaning and purpose. It can help us understand what is happening.

For example: the noises that I hear are data. The meaning of these noises – for example a running car engine – is information.


Information can serve to create knowledge. Knowledge can instruct how to do something.

And finally, knowledge can be turned into understanding that explains why it is happening.


Our toolbox for primary data collection

To collect primary data (data that we need to collect ourselves first), we can rely on a rather sophisticated toolbox – largely from the social sciences – that has been developed over decades.

There are tools for quantitative and qualitative data collection. Here is a list of some of the important tools available to us:


Measuring = reducing uncertainty

A lot of what we do in Monitoring and Evaluation is related to measuring. And that’s the problem: many of us think of measuring as something precise. But that’s not necessarily the case in M&E. What we need to be is roughly right. Those of us in Monitoring and Evaluation should carefully consider this advice from John Maynard Keynes, the British economist:

“It is better to be roughly right than precisely wrong.”

Measurements are not always exact, but they can be more or less precise. Either way, we know more after measuring than we did before: even an imprecise measurement will reduce our uncertainty to some degree. And depending on what we need, such an imprecise measure may be good enough for a specific purpose.

For example: if you want to get yourself new shoes, it is sufficient to know your shoe size. You don’t need to measure your feet precisely.

In Monitoring and Evaluation, this is how we need to look at measuring: as a set of observations that reduces uncertainty, where the result is expressed as a number. In most cases in Monitoring and Evaluation, we are not interested in a scientifically precise measurement. Instead, we aim for information that is sufficiently accurate to tell us what is going on – and to give us an indication of how it changes over time.

Bertrand Russell, the British mathematician and philosopher, formulated this well (apart from the gender-insensitive ‘man’):


Monitoring? Evaluation? Isn’t it the same?

Don’t be fooled. 

The fact that the terms Monitoring and Evaluation often go together and are called “M&E” is somewhat misleading.

When governments or development organisations use the term “Monitoring & Evaluation”, however, they mean something very specific: Monitoring & Evaluation is about collecting and analysing data and reporting on findings on how well a programme, a policy, a service or an organisation is performing, and making a judgement about its value.

It is true that Monitoring & Evaluation frequently share similar tools and methods. While they are interrelated, Monitoring & Evaluation are clearly separate activities.

So no, it’s NOT the same – it’s actually quite different. Monitoring and evaluation are usually carried out by different people and differ in how often they are carried out.

Ok. So what is monitoring?

In a nutshell: monitoring is like the dashboard of your car when you are driving. It tells you how fast you are going, how much petrol you have left, or maybe whether one of the car’s doors has been left open.

In governments and development organisations, monitoring is the regular and systematic collection, analysis, reporting and use of information about programmes, policies or services.

Monitoring is concerned with the performance of a programme, a policy or a service. Unlike an evaluation, it is typically conducted internally. That means monitoring is typically carried out by staff who work inside an organisation. And unlike evaluations, it is a continuous process. That means it is carried out non-stop during – and sometimes after – an activity. Monitoring typically supports the management of programmes, policies or services, and helps to manage their risks.

…and what is evaluation?

Evaluations are like the occasional check-up of your car: a systematic and impartial assessment of expected and achieved accomplishments.

Evaluations take a step back to look – as the term suggests – at the overall value of a programme, a policy or a service. Evaluations are usually conducted externally. That means they are typically carried out by evaluators or specialists with no link to the programme, policy, service or organisation. Having independent, external evaluators should ensure a more unbiased judgement. Unlike monitoring, an evaluation is not carried out all the time, but is a one-off activity. Typically, evaluations are carried out during or at the end of an activity.

And evaluations are more systematic than monitoring: Here are some typical questions an evaluation attempts to answer:

  • Is a programme, a policy, a service or an organization relevant? Does it suit the priorities and policies of the target group?
  • Is it effective? Does it achieve results?
  • Is it efficient? Does it achieve results at reasonable costs?
  • Does it have impact? What real difference has a programme, a policy or a service made for beneficiaries?
  • Is it sustainable? Will positive changes continue once funding is cut?

That is why evaluations tend to be broader in scope than monitoring.


Mini course in Monitoring and Evaluation

Are you new to Monitoring & Evaluation? Or do you want to briefly brush up your knowledge on key concepts of monitoring and evaluating programmes, services or policies?

Why not join my M&E mini course on Udemy? It’s short, to the point and covers the basics. And with a rating of 4.4 stars and 8,000+ subscribers it’s probably not too bad 😉

Here is the link to the course:

Mini course in basic concepts of monitoring and evaluation

Want to be a Monitoring & Evaluation professional?

Do you already work in M&E? Or do you consider building a career in M&E? My advice: Do it! To avoid any surprises, here is a very, very, very brief intro into Monitoring & Evaluation (or M&E) as a profession.

You may already know that, but: Monitoring and Evaluation is about measuring and tracking results of government and development programmes and judging their value.

Monitoring and Evaluation has for decades been a standard management tool for the United Nations and Non-Governmental Organizations. It is also growing in popularity across the globe, as more and more governments set up their own national Monitoring and Evaluation systems.

Working in Monitoring and Evaluation is pretty cool.  It is an exciting and growing area in governance and international development. It combines

  • clear, logical and creative thinking,
  • different techniques from social sciences,
  • innovative communication techniques and
  • cutting-edge technology for data collection and analysis.

Why M&E?

There are three reasons why Monitoring and Evaluation can be useful for government and development programmes:

First, it can help us understand if we are achieving the results we want:

  • Do programmes and policies lead to the results that we planned?
  • Is it worth spending all that money compared to the results that were achieved?
  • And: Do programmes and policies make a positive difference in the lives of people?

Second, Monitoring and Evaluation can help us to improve – to do things better. It should provide us with evidence to swiftly adjust or correct programmes and policies.

And third: Monitoring and Evaluation should help us learn more about what works and – equally important – what does not.

Clearly, Monitoring and Evaluation is not a silver bullet for ineffective governments and development organisations. But it can help us to know if we achieve the results we want, how to improve what we do, and what works and what does not work.


M&E needs to change. We need to change.

Monitoring and Evaluation is an exciting area of work because it is currently undergoing rapid changes.

Countries and organizations are increasingly taking innovative approaches to monitor and evaluate the performance of programmes, policies, services or organizations.

There are several reasons why this is happening.


First, citizens, parliaments and donors are rightly demanding to know what results are being achieved with their money. Today, it has become unacceptable to simply report back on the number of people that were trained. Instead, Monitoring and Evaluation is challenged to provide meaningful information about real changes as the result of a policy, a programme or a service.


Second, we live in an increasingly complex and fast-changing environment. We need Monitoring and Evaluation that can capture complex cause and effects and provide quick, real-time feedback to tweak or change a programme, a policy or a service.


Third, technology keeps accelerating. This opens a wide range of opportunities for innovation in Monitoring and Evaluation. We are only beginning to explore the potential of ‘big data’, the ubiquitous use of smartphones, satellite images and drones.

Planning for M&E: Results-Based Management

The result chain: a beginner’s guide

Monitoring and Evaluation is about measuring and tracking results. That is why it is important to understand what results are, and how to distinguish between different levels of the results chain.

In general, a “result” is something that happens or exists because of something else that has happened:

  • the results of a football game
  • the final value of a mathematical calculation, or
  • the outcomes of an election.

In development and governance, we use a more nuanced understanding of different types of ‘result’: the so-called result chain.

The result chain distinguishes between five logically connected elements:

  • inputs
  • activities
  • outputs
  • outcomes, and
  • impact


It’s not complicated – and we use this logic all the time. Let’s take a simple example:

I want to do something about living a healthier life. This is the desired impact. To do that, I want to reduce my weight. This is my planned outcome. To reduce my overall weight, I plan on eating more vegetables and exercising regularly. These are my planned outputs. Eating healthier requires more conscious shopping habits. More exercise requires me to go running or join a gym. These are some of my planned activities. These activities require some extra time and money. These are the inputs.
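The example above can be written down as a simple ordered structure – a minimal sketch, with all entries taken from the healthier-life example:

```python
# A results chain, ordered from inputs up to impact.
# All entries are illustrative, taken from the healthier-life example.
results_chain = [
    ("inputs",     "extra time and money"),
    ("activities", "conscious shopping, running, joining a gym"),
    ("outputs",    "eating more vegetables, exercising regularly"),
    ("outcomes",   "reduced weight"),
    ("impact",     "a healthier life"),
]

# Each level logically feeds into the next one up the chain.
for (level, example), (next_level, _) in zip(results_chain, results_chain[1:]):
    print(f"{level} ({example}) -> {next_level}")
```

The point of the ordering is that you can read the chain in either direction: upwards as "if we do this, then that should happen", downwards as "to achieve this, we need that".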


Let us look more in detail at a typical results chain of a development or government programme, policy or service. This time we start at the lower end of the result chain and work our way up:


Any programme, policy or service requires resources of some kind. We call these resources inputs.

For example: to put together this course, it took me time to record this video; you need an internet connection and a computer to watch it.

Typically, inputs refer to money, staff time, materials and equipment, transport costs, infrastructure, etc.


These inputs are required to carry out a number of activities.

For example: you watch videos of this lesson; you do a quiz; you do some additional reading; you watch the next videos; etc.

So: Activities are actions taken that use inputs to produce higher level results – ‘outputs.’

Typical activities in governance and development are the drafting of a policy document for a ministry, the organization of a media outreach campaign, the training of midwives in a new approach, etc.


The next level, outputs, are typically the result of several completed activities.

For example: after reading this blog post, you have the knowledge to critically review and design your own results chain.

In development and governance, an output is delivered if a group of people or an organization has improved capacities, abilities, skills, systems or policies, or if something is created, built or repaired.

Outputs are the direct result of a set of activities and delivered during the implementation of a programme, a policy or a service. Outputs are different from the next level of results – outcomes – because you largely have control over delivering outputs.

That means that if we – and our partners – have the resources and the time to deliver a certain output, we can largely guarantee that the output will be delivered.

That also means that, in turn, we are fully responsible for delivering an output.

Typical outputs are a draft policy document for a ministry, a media outreach campaign, improved skills for midwives, etc.


Now, this is very different from the next level of results: An outcome is something we hope to achieve as a result of what we do.

In development and governance, an outcome implies that institutions or people do

  a) something differently (behavioural change), or
  b) something better (change in performance)

The difference with outcomes is that – unlike outputs, which we largely control – we can only influence the achievement of an outcome; it ultimately goes beyond our control.

Typical outcomes are a parliament passing a new law, people changing their behaviour because of a media campaign, midwives applying new skills in their daily routine, etc.

Outcomes are typically achieved at the end or even after a programme, policy or service has been implemented.


Finally, outcomes should contribute to a broader impact.

An impact is the long-term effect of programmes, policies or services. It implies a detectable improvement in people’s lives.

Impact typically relates to positive economic, social, cultural, institutional, environmental or technological changes in the lives of a targeted population. An impact is often related to broad national goals or international aspirations like the Sustainable Development Goals. Impact is typically much broader than a programme, policy or service. And: an impact is typically detectable only after a few months or even years.

What about ‘results’?

So: Which of these elements of a result chain do we consider ‘results’?

We usually define ‘results’ – in the context of governance and development – as the top three elements of the result chain: outputs, outcomes and impact. And most importantly, results are not inputs or activities.

Example: The Land of Smokistan

Let us look at a few simplified but typical examples from the world of development and governance:

The country of Smokistan wants to reduce smoking.

The desired impact is that fewer people die of smoke-related illnesses within five years.

The planned outcome is that fewer people smoke fewer cigarettes.

Smokistan plans to achieve that by increasing taxes on cigarettes and making smoking more difficult in public spaces – these are the outputs.

This requires several activities: for example drafting and passing a new anti-smoking law or funding and implementing smoke-free public zones, etc.

These activities require additional time and money – the inputs.

Example: Domestic violence

Let’s take another example:

The same country aims at fewer people experiencing violence in a domestic setting within three years – the desired impact.

The planned outcome is that women and men openly discuss domestic violence.

The country aims at achieving that by drastically increasing public discussions on domestic violence in social media and traditional media outlets – the outputs.

This requires setting up – for example – a social media unit in a ministry and training influential bloggers and journalists in properly reporting on domestic violence – the activities.

Again, these activities require time and money – the inputs.

Did you like this post? If you are interested, check out my mini-course on Monitoring and Evaluation on Udemy, a learning platform:


Indicators in the age of Open Data

In the age of open data, indicators must be clear, credible and proportional.

It is a game changer for M&E: communicating with indicators suddenly becomes a core qualification.

More and more organizations make their data on aid, development and humanitarian flows accessible online as a result of the International Aid Transparency Initiative (IATI). This year, large donors like the UNDP even make project indicators, baselines, targets and status data available online. This is a very good reason to look hard at the indicators we are using. Anyone on this planet with internet access can look at our data. The times when M&E was seen as a highly technical speciality are gone. Development lingo, awkward formulations and technical expressions do not work anymore in the age of open data. M&E specialists now – also – need to be excellent communicators.

That is why indicators in the age of open data need to be – apart from technically sound:


Clear

The users of our indicator data – clients, journalists, academics, donors – need to understand what we say. We should use clear language, avoid technical expressions and abbreviations, and add background information so that an indicator can be understood even when looked at in isolation.


Credible

There will be much tougher scrutiny of our indicator data. While indicators in the past were, let’s be honest, looked at by very few people directly involved in an intervention, a much larger group of people – some of them very critical of a programme – will now look very carefully at our data. And since the net never forgets, our data will be stored basically forever. This requires us to be much more certain that we can back up all data with rock-solid, credible evidence.


Proportional

When linking up indicator data with the equally publicly accessible budget and expenditure reports, anyone with a calculator can do rough value-for-money calculations: this project raised 5,000 people out of poverty, but it cost 10,000 USD per person; that intervention built 700 schools for 25,000 USD each. These calculations are valuable, but we need to ensure that the indicators we pick properly capture the key outputs or outcomes of an intervention. In short: indicators and data need to be proportional to the funds used.
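Such a back-of-the-envelope value-for-money check can be sketched in a few lines; all budget and result figures below are invented for illustration:

```python
# Rough cost-per-result calculation from public budget and indicator data.
# All figures are invented for illustration.
def cost_per_result(total_expenditure_usd: float, results_achieved: int) -> float:
    """Return the unit cost of one result (e.g. one person lifted out of poverty)."""
    return total_expenditure_usd / results_achieved

# Example: a project that raised 5,000 people out of poverty for 50 million USD.
unit_cost = cost_per_result(50_000_000, 5_000)
print(f"Cost per person: {unit_cost:,.0f} USD")  # Cost per person: 10,000 USD
```

The calculation itself is trivial – which is exactly the point: once indicator and expenditure data sit side by side online, anyone can run it.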


Seven steps to measure results

Infographic: seven steps to measure results (Source: Thomas Winderl)


What is the information we would like to know in the future? How often do we want to know it?

For example: Our goal is to help 10,000 women increase their income. What we would like to know is a) if this is the case, b) by how much their income has increased, and c) how sustainable this increase in income really is after our support ends.


Ensure that the measurement will be used; if yes, clarify for what it will be used; estimate how much the information is worth (e.g. by thinking about how much you would pay to get that information).

For example: Data on changes in household income and their sustainability will help us to adjust the project and tell us if we are successful overall. And: since we absolutely need to know if what we do works, we would be ready to pay up to 10% of the project funds for this information.



Check if the result is really formulated at the right level (e.g. that it is not an activity or an output that can be delivered with nearly 100% certainty); check if it is clear who is supposed to change behaviour or performance; check if the result statement is time-bound.

For example: An increase in income is clearly not an output, since it is beyond our control. And: we expect that 10,000 women from low-income households in 2 provinces increase their income. We expect that to happen within 3 years.


‘Decompose’ any uncertain variable into constituent parts to identify directly observable things that are easier to measure.

For example: Income from non-formal economic activities of 10,000 women before taxes.


Have others already measured it (or parts of it)?

For example: The Bureau of Statistics collects this data, but only every 5 years – not frequent enough for our purpose.



Measure “just enough” (“optimal ignorance”); keep the information value in mind (see step 2).

For example: We would like to know the changes in income with a degree of uncertainty of ca. 15% (meaning plus or minus 15%).


Can the change be observed? If observing it in total is not feasible, can you observe a sample of it?

For example: Yes, increased income can be observed by frequently visiting the 10,000 women. However, that might not be feasible. That leaves us with the option of measuring a small, randomly selected sample of these women.
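The sampling idea can be sketched as follows – a minimal illustration in which the incomes are simulated data, not real measurements:

```python
import random
import statistics

random.seed(42)

# Simulated monthly incomes (USD) of all 10,000 women -- invented data.
population = [random.gauss(120, 30) for _ in range(10_000)]

# Instead of visiting everyone, measure a random sample.
sample = random.sample(population, 400)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / len(sample) ** 0.5  # standard error of the mean

# Approximate 95% confidence interval for the average income.
low, high = mean - 1.96 * sem, mean + 1.96 * sem
print(f"Estimated average income: {mean:.0f} USD (95% CI: {low:.0f}-{high:.0f})")
```

This is the sense in which we measure to reduce uncertainty rather than to get a precise number: visiting 400 randomly selected women already pins the average down to a narrow range.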

Does it leave a trail? Think like a forensic investigator; if there is no direct trail, does it lead to consequences which leave a trail (think of a proxy measurement)?

For example: Increased income will leave a trail. Expenditure might rise with increasing income; local tax payments might increase; living conditions might improve; saving rates might go up. However, we conclude that none of these would provide us with a sufficiently accurate measurement.

If not, can it be ‘tagged’? Tagging involves adding some sort of ‘tracker’ so that it does leave a trail and/or can be observed.

For example: We can provide a randomly selected 5% of the women with an inexpensive mobile phone, a SIM card and some credit after they complete our training. We can then call them or send them an SMS with 3 simple questions on a regular basis to collect the information we want.

If not, can it be forced to occur (e.g. through an experiment)?

If not, the result is probably not sufficiently well defined. Repeat steps 1 to 7.


We can (nearly) measure anything

“We had some great results -but you can’t measure them!”

“Not everything that counts can be counted!”

“You can’t measure everything!”

These are some of the critical arguments we often hear in Monitoring and Evaluation.

The reason for this confusion is the common misconception that we in Monitoring and Evaluation always aim for precise, scientific measurements.

However, our work is about measuring to reduce uncertainty – and not necessarily about precise numbers.

Let’s take an example: Can we measure happiness? Yes, we can. It is done all the time. Not precisely, but we can measure approximate levels of happiness and how they change over time. For example, we can ask people regularly how happy they feel on a scale from 1 to 10. Or we can define a set of criteria that we know from research make people happy – a warm, dry place to sleep, food on the table, a sense of self-control over their lives, and so on. Or we can use face recognition software to track over time how often people smile per day.
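A 1-to-10 happiness survey tracked over time could look like this sketch; all responses are invented:

```python
import statistics

# Monthly survey: "How happy do you feel on a scale from 1 to 10?"
# All responses are invented for illustration.
responses = {
    "January":  [6, 7, 5, 8, 6, 7],
    "February": [7, 7, 6, 8, 7, 8],
    "March":    [8, 7, 7, 9, 8, 8],
}

# An imprecise but useful measurement: the trend of the average score.
for month, scores in responses.items():
    print(f"{month}: average happiness {statistics.mean(scores):.1f} / 10")
```

No single number here is a precise measurement of happiness – but the month-to-month trend of the average is exactly the kind of uncertainty-reducing signal we are after.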

In fact, the so-called World Happiness Report regularly ranks countries according to their level of happiness. And the Himalayan kingdom of Bhutan sets policies based on a Gross National Happiness index.

In a nutshell: If we can observe a thing in any way at all, we can also measure it.

And we know: what gets measured gets done.