Categories
Adaptive programming, Innovation

Stop overthinking: Minimum Viable Product

MINIMUM WHAT?!?

Ok. What are we talking about?

A minimum viable product – MVP in short – is a product with just enough features to gather validated learning about a possible intervention or product. It is deliberately imperfect; any additional work on it beyond what was required to start learning is waste, no matter how important it might have seemed at the time.

The idea of using minimum viable products comes from product development in the lean start-up movement in the private sector. The best-known writer on the lean startup movement is probably Eric Ries, with his book ‘The lean startup: how constant innovation creates radically successful businesses’ (1).

The underlying reason to develop an MVP is to empirically test assumptions and hypotheses about what works and what does not. Using MVPs is a structured way to check that you have an efficient and appropriate solution or approach before rolling it out or making a big investment in it.

Minimum viable products can range in complexity from extremely simple ‘smoke tests’ (little more than an announcement or advertisement for a service or product) to actual early prototypes, complete with problems and missing features.
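As a concrete (and entirely hypothetical) illustration of what ‘just enough to learn’ can mean in practice, the sketch below shows how the results of a simple smoke-test MVP – say, a sign-up page announcing a planned service – might be checked against a pre-agreed success threshold. The visitor numbers, sign-up count and 5% threshold are assumptions made for this example, not figures from the original post.

```python
import math

def conversion_rate_with_ci(signups, visitors, z=1.96):
    """Sign-up conversion rate of a smoke-test page with a rough 95%
    confidence interval (normal approximation to the binomial)."""
    rate = signups / visitors
    margin = z * math.sqrt(rate * (1 - rate) / visitors)
    return rate, max(0.0, rate - margin), min(1.0, rate + margin)

# Hypothetical smoke-test results and a pre-agreed validation threshold.
rate, low, high = conversion_rate_with_ci(signups=34, visitors=500)
threshold = 0.05  # assumed: the idea counts as 'validated' at >= 5% sign-up

print(f"conversion: {rate:.1%} (95% CI {low:.1%} to {high:.1%})")
print("hypothesis supported" if low >= threshold else "keep learning or pivot")
```

The point is not the statistics but the discipline: the success criterion is agreed before the test, and anything built beyond what this simple check requires would be waste.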




Image source: Stop overthinking…Just stop!, Matias Honorato

WHEN TO USE IT

Minimum viable products are the fastest way to get through the build-measure-learn feedback loop of adaptive programming (see section on adaptive programming). The goal of MVPs is to begin the process of learning, not end it. Unlike ‘prototypes’ or ‘proofs-of-concept’, MVPs are designed not just to answer product design or technical questions, but to test out theories of change and fundamental development hypotheses.

Some forms of minimum viable products to consider are:

  • Video MVP: simple, short video that demonstrates how a programme, project, policy, product or service is meant to work
  • ‘Concierge’ MVP: delivering the programme, project, policy, product or service manually to a single client or a very small number of clients
  • ‘Wizard of Oz’ MVP: clients believe they are interacting with an actual service or product, which is in fact simulated by humans behind the scenes

AN EXAMPLE 

In Papua New Guinea, UNDP tested whether a low-cost tool could help the government address corruption and mismanagement, which gobbled up 40% of the country’s annual budget. In 2014, UNDP partnered with local telecom companies to design a simple SMS-based minimum viable product. The MVP was subsequently tested with 1,200 staff in the Department of Finance and updated based on user feedback. Within four months, it had led to over 250 cases of alleged corruption being placed under investigation. Based on the uptake of the first version, the service was rolled out to six new departments and 25,000 government officials countrywide in 2015. (2)

For examples from a business start-up perspective, see ‘The ultimate guide to minimum viable products’ by Scale up my Business.

———————————————–

This blog post is based on: The lean startup: how constant innovation creates radically successful businesses, Eric Ries, 2011, pp.92-113; Minimum viable product (MVP), Wikipedia; Prototype testing plan, Development Impact & You (DIY)

Sources:

(1) The lean startup: how constant innovation creates radically successful businesses, Eric Ries (2011)

(2) 6 ways to innovate for 2030, Benjamin Kumpf, 19-04-2016; Papua New Guinea: Phones against corruption, UNDP

Categories
Innovation, Planning

Innovative programme design

It has always been clear to me: Monitoring and Evaluation depends to a large extent on how development or government programmes are planned and designed. That is why good M&E is strongly linked with good planning and design. In a way, programme design is a natural extension of our skill set in M&E.

That is why this blog post takes a closer look at the rapidly evolving landscape of options for innovative programme design.

DRIVERS OF INNOVATION

In my view, we have recently seen such a push for innovation in programme design for three reasons:

  • Disillusionment with linear models: Linear planning tools such as logical frameworks and linear theories of change are ineffective in more complex, complicated or chaotic contexts. If you are not yet convinced, look at Ben Ramalingam’s comprehensive critique of current planning models (1).
  • Complexity and wicked problems: Genuine progress toward sustainable development is increasingly complex: solutions are not simple or obvious, those who would benefit are the ones who most lack power, those who can make a difference are disengaged, and political barriers are too often overlooked. Look, for example, at the ‘Doing Development Differently’ manifesto from 2014. That is why programme design must increasingly cope with ‘wicked’ problems: problems that are difficult to define clearly, are deeply interconnected, and are driven by many factors and unforeseen events.
  • Data revolution: The world has been – and still is – undergoing a data revolution with far-reaching consequences for programme design: we now live in a world where 90 percent of the data in existence today was created in the last two years alone. Every minute, more than 270,000 tweets are published worldwide, Google receives no fewer than 4 million search queries, and over 200 million emails are sent. (2)

CHALLENGES TO INNOVATION

Despite this push, development organizations and governments have mostly kept more innovative programme designs contained in ‘innovation labs’ and small pilot projects. There are significant challenges to innovative programme design. Three reasons stand out in my view:

  • Mindset: Many of these innovations require ‘unlearning’ the traditional, linear design approach based on a chain of results.
  • Rules and regulations: Many of these innovations may be difficult to carry out within the current rules and regulations for programme design and implementation.
  • Donor requirements: Testing these innovations may require flexibility by the donor.

ELEVEN OPTIONS FOR INNOVATIVE PROGRAMME DESIGN

In my view, there are eleven concrete options for governments and development organisations to consider for innovative programme design in complex, complicated or chaotic settings:

Sources:

(1) Aid on the edge of chaos: rethinking international cooperation in a complex world, Ramalingam 2013

(2) Can big data help us make emergency services better? David Svab/Brett Romero, 20/04/2016

Categories
Innovation, Monitoring and Evaluation

M&E needs to change. We need to change.

Monitoring and Evaluation is an exciting area of work because it is currently undergoing rapid changes.

Countries and organizations are increasingly taking innovative approaches to monitor and evaluate the performance of programmes, policies, services or organizations.

There are several reasons why this is happening.

‘RESULTS’ ARE HERE TO STAY

First, citizens, parliaments and donors are rightly demanding to know what results are being achieved with their money. Today, it has become unacceptable to simply report back on the number of people who were trained. Instead, Monitoring and Evaluation is challenged to provide meaningful information about the real changes that result from a policy, a programme or a service.

INCREASING COMPLEXITY AND SPEED

Second, we live in an increasingly complex and fast-changing environment. We need Monitoring and Evaluation that can capture complex causes and effects and provide quick, real-time feedback to tweak or change a programme, a policy or a service.

ACCELERATING TECHNOLOGY

Third, technology keeps advancing at an accelerating pace. This opens up a wide range of opportunities for innovation in Monitoring and Evaluation. We are only beginning to explore the opportunities of ‘big data’, the ubiquitous use of smartphones, and the use of satellite images and drones.

Categories
Evaluation, Innovation

Innovation in development evaluation

Development aid is changing rapidly. Development evaluation needs to change with it. Why? And how?

PLANNING IN A COMPLEX, DYNAMIC ENVIRONMENT REQUIRES MORE AND DIFFERENT EVALUATIONS

Linear, mechanistic planning for development is increasingly seen as problematic. Traditional feedback loops that diligently check whether an intervention is ‘on track’ towards a pre-defined milestone do not work with flexible planning. In their typical form (with quarterly and annual monitoring, mid-term reviews, final evaluations, annual reporting, etc.), they are also too slow to influence decision-making in time.

A new generation of evaluations is needed – one which better reflects the unpredictability and complexity of interactions typically found in systems, one which gives a renewed emphasis to innovation, with prototypes and pilots that can be scaled up, and one which can cope with a highly dynamic environment for development interventions.

Indeed, this is an exciting opportunity for monitoring and evaluation to re-invent itself: With linear, rigid planning being increasingly replaced by a more flexible planning approach that can address complex systems, we find now that we need more responsive, more frequent, and ‘lighter’ evaluations that can capture and adapt to rapidly and continuously changing circumstances and cultural dynamics.

We need two things: firstly, we need up-to-the-minute ‘real-time’ or continuous updates at the outcome level; this can be achieved by using, for example, mobile data collection, intelligent infrastructure, or participatory statistics that can ‘fill in’ the time gaps between official statistical data collections. Secondly, we need to use broader methods that can record results outside a rigid logical framework; one way to do this is through retrospective ‘outcome harvesting’, an approach that collects evidence of what has been achieved and works backward to determine whether and how the intervention contributed to the change.

MULTI-LEVEL MIXED METHODS BECOME THE NORM

Although quantitative and qualitative methods are still regarded by some as two competing and incompatible options (like two-year-olds not yet able to play together, as Michael Quinn Patton put it in a recent blog post), there is a rapidly emerging consensus that an evaluation based on a single method is simply not good enough. For most development interventions, no single method can adequately describe and analyze the interactions found in complex systems.

Mixed methods allow for triangulation – or comparative analysis – which enables us to capture and cross-check complex realities and can provide us with a full understanding, from a range of perspectives, of the success (or lack of it) of policies, services or programmes.

It is likely that mixed methods will soon become the standard for most evaluations. But the use of mixed methods alone is not enough; they should be applied on multiple levels. What we need is for multi-level mixed methods to become the default approach of evaluation, and for the qualitative-quantitative debate to be declared over.

Table: Matrix with examples of qualitative and quantitative methods at different levels
(Example: a multi-level mixed method to evaluate a language school)

Source: adapted from Introduction to Mixed Methods in Impact Evaluation, Bamberger 2012, InterAction/The Rockefeller Foundation, Impact Evaluation Notes, No. 3, August 2012

OUTCOMES COUNT

Whatever one might think about the merits or fallacies of results-based management, development evaluations have to deal with one consequence: a broad agreement that what ultimately counts – and should therefore be closely monitored and evaluated – are outcomes and impact. That is to say, what matters is not so much how something is done (= inputs, activities and outputs), but what happens as a result. And since impact is hard to assess if we have little knowledge of outcome results, monitoring and evaluating outcomes becomes key.

There is one problem, however: By their nature, outcomes can be difficult to monitor and evaluate. Typically, data on behaviour or performance change is not readily available. This means that we have to collect primary data.

The task of collecting more and better outcome level primary data requires us to be more creative, or even to modify and expand our set of data collection tools.

Indeed, significant primary data collection will often become an integral part of evaluation. It will no longer be sufficient to rely on the staples of minimalistic mainstream evaluations: the non-random ‘semi-structured interviews with key stakeholders’, the unspecified ‘focus groups’, and so on. Major primary data collection will need to be carried out before or as part of the evaluation process. This will also require more credible and more outcome-focused monitoring systems.

Thankfully, there are many tools becoming available to us as technology develops and becomes more widespread: already, small, nimble random-sample surveys such as lot quality assurance sampling (LQAS) are in more frequent use. Crowdsourced information gathering or the use of micro-narratives can enable us to collect data that might otherwise be unobtainable through a conventional evaluation or monitoring activity. Another option is the use of ‘data exhaust’ – passively collected data from people’s use of digital services such as mobile phones, and web content such as news media and social media interactions.
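To illustrate why such small surveys can still be informative, here is a minimal sketch of the binomial logic behind a typical LQAS decision rule. The sample size of 19 and the decision rule of 13 are common textbook choices used here as assumptions; they are not taken from any of the sources above.

```python
from math import comb

def prob_meets_rule(n, d, p):
    """Probability that a random sample of n people from an area with true
    coverage p contains at least d 'successes', i.e. that the area is
    classified as having reached the coverage target."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d, n + 1))

# Hypothetical LQAS design: sample 19 people per area, decision rule of 13.
n, d = 19, 13
for true_coverage in (0.50, 0.65, 0.80):
    print(f"true coverage {true_coverage:.0%}: "
          f"P(classified as reaching the target) = "
          f"{prob_meets_rule(n, d, true_coverage):.2f}")
```

Tabulating these probabilities is how an LQAS design is judged: a sample of only 19 per area rarely estimates coverage precisely, but it can classify areas as reaching the target or not with known, acceptable error rates – which is often all a nimble monitoring system needs.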

So there is good reason for optimism. The day will soon come when it is standard practice for all evaluations to be carried out with mixed methods at multiple levels, with improved primary data collection enabling us to evaluate what really counts in our interventions: the outcomes and the impact.

Table:

Eleven innovations potentially useful for innovative development evaluations

Source: Discussion Paper: Innovations in Monitoring & Evaluating Results, UNDP, 05/11/2013

Note: This blog post was originally published in 2014 as a guest blog on BetterEvaluation.