Programme design

How to formulate strong outputs

Outputs are arguably not the most important level of the results chain. It is outcomes that should be the focus of a good plan. Ultimately, that’s what counts.

However, outputs still matter.

Simply put, outputs refer to changes in skills or abilities, or the availability of new products and services. In plain lingo: outputs are what we plan to do to achieve a result.

Ok, let’s be a bit more precise: ‘Outputs’ usually refers to a group of people or an organization that has improved capacities, abilities, skills, knowledge, systems or policies, or to something that is built, created or repaired as a direct result of support provided. That’s a definition we can work with.

Language is important

When describing what you do, focus on the change, not the process. Language matters.

Don’t say: ‘Local organisations will support young women and men in becoming community leaders.’ This emphasises the process rather than the change.

Instead, emphasise what will be different as a result of your support. Say: ‘Young women and men have the skills and motivation to be community leaders.’

Make it time-bound

An organization’s support is typically not open-ended: you usually expect to wrap up what you do at a certain point. Emphasise that your activities are carried out within a certain time frame, for example by including ‘By January 2019, …’ in the formulation.

A formula for describing what you do

To ensure that you accurately describe what you do, use the following formula:

Thomas Winderl, 08.09.2020

Theory of Change

7 principles of a Theory of Change

A good theory of change (ToC) is based on seven principles:

  1. Repeat doing it until you get it right
  2. Links are as important as expected changes
  3. Risks and assumptions are key
  4. It must be scalable
  5. It is not a results chain
  6. Organisations often already have ToCs in place, which can’t be ignored
  7. It has to be simple

Planning, Results Based Management

A Theory of Change

Probably as an inevitable result of the buzz of the past few years, many Theories of Change are useless.

One reason for that, I suspect, is that they are typically not developed in a logical sequence: without a thorough problem analysis, with a possible intervention already in mind, or simply as the mirror image of a results chain.

Here is my suggestion for the sequence in which we should go about it:

Adaptive programming, Innovation

Stop overthinking: Minimum Viable Product


Ok. What are we talking about?

A minimum viable product – MVP in short – is a product with just enough features to gather validated learning about a possible intervention or product. It is deliberately imperfect; any additional work on it beyond what was required to start learning is waste, no matter how important it might have seemed at the time.

The idea of using minimum viable products is taken from product development in the lean start-up movement in the private sector. The best-known writer on the lean startup movement is probably Eric Ries with his book ‘The lean startup: how constant innovation creates radically successful businesses’ (1).

The underlying reason to develop an MVP is to empirically test assumptions and hypotheses about what works and what does not. Using MVPs is a structured way to check that you have an efficient and appropriate solution or approach before rolling it out or making a big investment in it.

Minimum viable products can range in complexity from extremely simple ‘smoke tests’ (little more than an announcement or advertisement for a service or product) to actual early prototypes, complete with problems and missing features.

Source: Stop overthinking…Just stop!, Matias Honorato


Minimum viable products are the fastest way to get through the build-measure-learn feedback loop of adaptive programming (see section on adaptive programming). The goal of MVPs is to begin the process of learning, not end it. Unlike ‘prototypes’ or ‘proofs-of-concept’, MVPs are designed not just to answer product design or technical questions, but to test out theories of change and fundamental development hypotheses.
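The build-measure-learn loop described above can be sketched as a simple iterative decision process. The sketch below is purely illustrative: the function names, the simulated feedback and the 60% decision threshold are my own assumptions, not part of any MVP framework or of the examples in this post.

```python
# Illustrative sketch of one pass through the build-measure-learn loop.
# All names, the feedback data and the decision threshold are hypothetical,
# chosen only to show the structure of learning from a minimum viable product.

def build(hypothesis):
    """Create the smallest possible product that can test the hypothesis."""
    return {"hypothesis": hypothesis, "features": ["core feature only"]}

def measure(mvp, feedback):
    """Validated learning: what share of the feedback confirms the hypothesis?"""
    confirmations = sum(1 for answer in feedback if answer == "confirmed")
    return confirmations / len(feedback)

def learn(success_rate, threshold=0.6):
    """Decide whether to persevere with the approach or pivot to a new one."""
    return "persevere" if success_rate >= threshold else "pivot"

# One iteration with simulated user feedback:
mvp = build("an SMS tool helps staff report suspected mismanagement")
rate = measure(mvp, ["confirmed", "confirmed", "rejected", "confirmed"])
decision = learn(rate)
```

In practice the loop repeats: each decision to persevere or pivot feeds a new, slightly larger build, which is measured again.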

Some forms of minimum viable products to consider are:

  • Video MVP: simple, short video that demonstrates how a programme, project, policy, product or service is meant to work
  • ‘Concierge’ MVP: testing how a programme, project, policy, product or service works by delivering it manually to a single client or a very few clients
  • ‘Wizard of Oz’ MVP: clients believe they are interacting with an actual service or product, which is in fact simulated by humans behind the scenes


In Papua New Guinea, UNDP tested whether a low-cost tool could help the government address corruption and mismanagement, which gobbled up 40% of the country’s annual budget. In 2014, UNDP partnered with local telecom companies to design a simple SMS-based minimum viable product. The MVP was subsequently tested with 1,200 staff in the Department of Finance and updated based on user feedback. Within four months, it led to over 250 cases of alleged corruption being placed under investigation. Based on the uptake of the first version, the service was rolled out to six new departments and 25,000 government officials countrywide in 2015. (2)

For examples from a business start-up perspective see e.g. ‘The ultimate guide to minimum viable products’ by Scale Up My Business.


This blog post is based on: The lean startup: how constant innovation creates radically successful businesses, Eric Ries, 2011, pp.92-113; Minimum viable product (MVP), Wikipedia; Prototype testing plan, Development Impact & You (DIY)


(1) The lean startup: how constant innovation creates radically successful businesses, Eric Ries (2011)

(2) 6 ways to innovate for 2030, Benjamin Kumpf, 19-04-2016; Papua New Guinea: Phones against corruption, UNDP

Innovation, Planning

Innovative programme design

It has always been clear to me: Monitoring and Evaluation depends to a large extent on how development or government programmes are planned and designed. That is why good M&E is strongly linked with good planning and design. In a way, programme design is a natural extension of our skill set in M&E.

That is why this blog post looks closer at the rapidly evolving landscape of options for innovative programme design.


In my view, we have recently seen such a push for innovation in programme design for three reasons:

  • Disillusion with linear models: Linear planning tools such as logical frameworks and linear theories of change are ineffective in a more complex, complicated or chaotic context. If you are not yet convinced, look at Ben Ramalingam’s comprehensive critique of current planning models (1).
  • Complexity and wicked problems: Genuine progress toward sustainable development is increasingly complex: solutions are not simple or obvious, those who would benefit are the ones who most lack power, those who could make a difference are disengaged, and political barriers are too often overlooked. Look for example at the ‘Doing Development Differently’ manifesto from 2014. That is why programme design must increasingly cope with ‘wicked’ problems: problems that are difficult to define clearly, are deeply interconnected, and are driven by many factors and unforeseen events.
  • Data revolution: The world has been – and still is – undergoing a data revolution with far-reaching consequences for programme design: We now live in a world where 90 percent of the data out there today has been created in the last two years alone. Every minute, more than 270,000 tweets get published worldwide, Google receives no less than 4 million search queries, and over 200 million emails are sent.(2)


Despite this push, development organizations and governments have mostly kept more innovative programme designs contained in ‘innovation labs’ and small pilot projects. There are significant challenges to innovative programme design. Three reasons stand out in my view:

  • Mindset: Many of these innovations require ‘unlearning’ the traditional, linear design approach based on a chain of results.
  • Rules and regulations: Many of these innovations may be difficult to carry out within the current rules and regulations for programme design and implementation.
  • Donor requirements: Testing these innovations may require flexibility by the donor.


In my view, there are eleven concrete options for innovative programme design in complex, complicated or chaotic settings for governments and development organisations to consider:


(1) Aid on the edge of chaos: rethinking international cooperation in a complex world, Ramalingam 2013

(2) Can big data help us make emergency services better? David Svab/Brett Romero, 20/04/2016