Why do we use JMP estimates on WASHwatch?

If you look at the bottom of any country page on WASHwatch, you’ll see two numbers: the percentage of that country’s population with access to water, and the percentage with basic sanitation.

These numbers have recently been updated (we will be updating the site shortly – watch this space!). This has led to much discussion about the merits of these estimates. Ian Ross has a very thorough blog post arguing the merits of the JMP estimation method.

I’ll not rehearse his arguments here, and I don’t disagree with what he says, but I’ll expand on two key questions I think his post has raised.

The main criticisms of global estimates are:

1) that they are useless – you can’t make WASH investment or policy decisions in Uganda based on JMP estimates*

2) that they are extractive – they require costly data collection but are of no value to the people who provide the data

These are both valid points, but they are not unanswerable.

Do we need global numbers?

The point of JMP estimates is to be able to compare levels and trends between countries; if that is not desirable, the details of the method are immaterial – we should just stick with country level information.

WASHwatch.org is based on the premise that inter-country comparison is of enormous value: that highlighting where a country is performing poorly compared to its neighbours can spur that government to greater action, and that identifying the countries and regions of greatest need can allow international aid to be well targeted – and help campaigners to know whether it actually is.

Whether donors choose to use the data to best effect is a secondary question – we can’t know if they are off target, or demand that they do better, if the comparable data on need do not exist.

The usefulness of the data is only a problem if you are looking at global estimates as a service provision tool, akin to Water Point Mapping or Utility data. But this is not their purpose – these data are political. They tell us (comparably!) where the need is high, and where changes have been slow to materialise.

They also provide a benchmark. If there is a wide difference between a recent survey and the JMP estimates, this provokes a question – has a recent government initiative had a massive impact? Has a terrible disaster destroyed a lot of services? The difference from the normal is thrown into relief and invites a closer look.

So yes! We need these numbers.

Can we afford these numbers?

One of the most important features of the JMP estimates is that they are incredibly cheap – all the data on which they are based is already being collected, for national purposes. All that has to be done is to ensure that wherever a survey looks at water and sanitation it includes a few basic, comparable questions, and to collect all the surveys that have ever been taken and extract the relevant data.

This is not easy, but it’s a hell of a lot easier than going annually to every country in the world and observing every water connection. This is a good example of ‘aggregating up’ to form global estimates, rather than modelling global estimates and then generalising down to country level. For a reasonable, methodologically transparent and consistent estimate of global WASH poverty, the JMP data is an absolute bargain!**

As Ian and others commenting have pointed out, we need different data for different purposes, and we must cross-reference and triangulate as much as possible to ensure reliability and precision, and to draw the most effective policy conclusions. But throwing out a cheap and politically useful piece of global analysis because it doesn’t do everything we would like would be a terrible folly.


* This seems to be the crux of the objections around functionality – the surveys do measure functionality in the sense that the source currently being used must be functional, but they don’t account for whether the same source will still function in six months.

** It is worth noting that the only affordable way for JMP to provide more credible estimates that take functionality, quality, proximity and all the other facets of access into account is for these data to be collected at national level. They are advocating for this in the post-MDG monitoring proposals, but they cannot be expected to collate and analyse data that no-one is yet collecting, and to collect it themselves would be exactly the kind of eye-wateringly expensive and extractive study we want to avoid.


WASHwatch.org is not the last word – it is always a work in progress.

Our goal at WASHwatch is to provide a complete picture of national water and sanitation policies by measuring the progress governments have made towards meeting their policy and budgetary commitments. Were this simple, it would already have been done. Unfortunately, the necessary data is not always easily accessible – if it is available at all. So how do you analyse data that is not available? You have to generate it!

The data available at WASHwatch can be thought of as an ongoing survey of how WASH stakeholders and advocates view their government’s progress. One common worry we have encountered is that because the data is crowdsourced (and therefore has not been independently checked for validity), it might not be accurate. We see this not as a challenge but as an opportunity. Often when people think of data they think of hard data – quantifiable data put forth as indisputable fact. Hard data is certainly valuable, but there is also a role to be played by hard data’s softer counterpart. Soft data is more subjective: a collection of anecdotes, surveys and opinions. Both hard and soft data add value, and the two often work together, complementing each other to tell the full story.

The data available at WASHwatch is subjective and, as with any survey data, the value is all in the analysis. The fact that survey respondents all agree about something does not make it true, but their agreement (or disagreement) tells a story. “Good progress” will undoubtedly mean different things to different people. Not all contributors will have access to the same information; some might have a rural perspective and others a more urban one. However, it is precisely this array of perspectives that enriches the WASHwatch data. Consensus over a country’s high score indicates more than just progress – it indicates a certain level of transparency, because multiple stakeholders are aware of what the government is doing. Similarly, a government scoring poorly does not necessarily mean that no progress has been made, but rather that the government in question has not been very transparent about its successes (or its failures). Although not always explicitly stated, it is certainly in the spirit of the Sharm el-Sheikh, eThekwini and SACOSAN commitments for governments to improve transparency around water and sanitation programming, policy and funding. After all, what good is a national sanitation policy if no one is aware it exists and no one can monitor whether or not it is being implemented?

WASHwatch contributors are encouraged to comment on previously uploaded data. Like any good survey, one response simply won’t do – the more the merrier! In order to generate reliable data, we need responses from multiple sources. WASHwatch data is constantly changing, reflecting changes in government actions and policies. When governments make progress or start to fall behind, WASHwatch provides a platform for people to report these changes in real time. Like policy itself, policy monitoring is an active and dynamic process. WASHwatch provides a platform to collect and share data about water and sanitation policy commitments. However, WASHwatch is not the last word. It, like the government policies we monitor, is a work in progress.

Join the discussion at WASHwatch


Katelyn Rogers