Once we have agreed on a clever, limited set of indicators, all seems well. However, we are not quite there yet. For most indicators, we can still choose a specific ‘angle’.
What do I mean?
For example, say we are considering the “Number of female members of Parliament” as an indicator. That may be fine. However, we may prefer the “Proportion of parliamentarians that are women” instead – and often that is preferable. Or we may want to compare it with the proportion of voters that are women. Or we want to capture the change over time – or even the funds invested in political leadership by women.
Soooo many options. But we need to get the angle right.
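To make the difference between these angles concrete, here is a minimal sketch in Python. The numbers are purely illustrative, not real parliamentary statistics, and the function names are mine – the point is only that the same underlying data yields very different indicators depending on the angle chosen.

```python
# Different "angles" on the same underlying data about women in parliament.
# All figures below are illustrative, not real statistics.

def count_women(mps):
    """Head-count angle: number of female members of parliament."""
    return sum(1 for mp in mps if mp == "F")

def proportion_women(mps):
    """Proportion angle: share of parliamentarians that are women."""
    return count_women(mps) / len(mps)

def change_over_time(proportions):
    """Trend angle: change in the proportion between two points in time."""
    return proportions[-1] - proportions[0]

# Hypothetical 100-seat parliament at two points in time.
parliament_2020 = ["F"] * 30 + ["M"] * 70   # 30 women of 100 seats
parliament_2024 = ["F"] * 38 + ["M"] * 62   # 38 women of 100 seats

print(count_women(parliament_2024))         # 38 women
print(proportion_women(parliament_2024))    # 0.38, i.e. 38%
print(change_over_time([proportion_women(parliament_2020),
                        proportion_women(parliament_2024)]))  # ≈ 0.08, i.e. +8 percentage points
```

Each of the three printed values is a defensible indicator on its own; which one we report is exactly the “angle” decision.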
In the age of open data, indicators must be clear, credible and proportional.
It is a game changer for M&E: Communicating with indicators suddenly becomes a core qualification.
More and more organizations make their data on aid, development and humanitarian flows accessible online as a result of the International Aid Transparency Initiative (IATI). This year, large donors like the UNDP even make project indicators, baselines, targets and status data available online. This is a very good reason to look hard at the indicators we are using. Anyone on this planet with internet access can look at our data. The times when M&E was seen as a highly technical speciality are gone. Development lingo, awkward formulations and technical expressions do not work anymore in the age of open data. M&E specialists now – also – need to be excellent communicators.
That is why indicators in the age of open data need to be – apart from technically sound – clear, credible and proportional:
Clear: The users of our indicator data – clients, journalists, academics, donors – need to understand what we say. We should use clear language, avoid technical expressions and abbreviations, and add background information so that an indicator can be understood even when looked at in isolation.
Credible: There will be much tougher scrutiny of our indicator data. While indicators in the past, let’s be honest, were looked at by very few people directly involved in an intervention, a much larger group of people – some of them very critical of a programme – will now examine our data very carefully. And since the net never forgets, our data will be stored basically forever. This requires us to be much more certain that we can back up all data with rock-solid, credible evidence.
Proportional: When linking indicator data with the equally publicly accessible budget and expenditure reports, anyone with a calculator can do rough value-for-money calculations: This project raised 5,000 people out of poverty, but it cost 10,000 USD per person. That intervention built 700 schools at 25,000 USD each. These calculations are valuable, but we need to ensure that the indicators we pick properly capture the key outputs or outcomes of an intervention. In short: indicators and data need to be proportional to the funds used.
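The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. This is only an illustration of the kind of calculation any reader could do with published budget and results data; the project figures are the illustrative ones from the text, and the implied total budgets are my own assumption derived from them.

```python
# Rough value-for-money check of the kind anyone can do once
# indicator data and expenditure reports are both public.
# Figures are illustrative, taken from the text, not real project data.

def cost_per_unit(total_cost_usd, units):
    """Unit cost: total expenditure divided by the key output or outcome."""
    return total_cost_usd / units

# "This project raised 5,000 people out of poverty at 10,000 USD per person."
poverty_project_cost = 5_000 * 10_000       # implied total: 50 million USD
print(cost_per_unit(poverty_project_cost, 5_000))   # 10000.0 USD per person

# "That intervention built 700 schools at 25,000 USD each."
schools_project_cost = 700 * 25_000         # implied total: 17.5 million USD
print(cost_per_unit(schools_project_cost, 700))     # 25000.0 USD per school
```

The calculation itself is trivial; the hard part, as the text argues, is making sure the denominator – the output or outcome counted – actually captures what the intervention was for.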