Uncertainty in Science

DJS, Times, March 21st 2010

A popular view of scientists is that they deal in certainties, but they are (or should be) the first to admit the limitations of what they know. But can scientists admit uncertainty and still be trusted by politicians and the public? Or would the language of possibilities and probabilities merely shift attention to those with more strident, confident arguments?

Nobody is expected to predict the future exactly. So there's generally no problem in acknowledging the risks of common activities, and it's natural to use past experience to be open and precise about the uncertainties. Patients may, for example, be told that in 1,000,000 operations around 5 deaths are expected from the anaesthetic alone – an anaesthetic risk of 5 micromorts (a micromort being a 1-in-a-million chance of dying) per operation. They won't be told that this is roughly equivalent to the risk of riding 30 miles on a motorbike, driving 1,000 miles in a car, going on one scuba dive, living 4 hours as a heroin user or serving 4 hours in the UK army in Afghanistan.
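
The arithmetic behind a micromort is simple enough to sketch in a few lines of Python. This is a minimal sketch: the 1-micromort-per-6-motorbike-miles rate below is an assumption, chosen only to be consistent with the 30-mile equivalence above.

    # A micromort is a one-in-a-million chance of death, so a risk in
    # micromorts is just expected deaths per million exposures.
    def micromorts(deaths, exposures):
        return deaths / exposures * 1_000_000

    # 5 anaesthetic deaths expected in 1,000,000 operations:
    print(micromorts(5, 1_000_000))   # 5.0 micromorts per operation

    # Assuming roughly 1 micromort per 6 miles on a motorbike,
    # a 30-mile ride carries about the same risk:
    print(30 / 6)                     # 5.0 micromorts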

In more complicated situations, scientists build mathematical models that are supposed to mimic what we understand about the world. Models are used for guiding action on swine flu, predicting climate change, and assessing whether medical treatments should be provided by the NHS. Statisticians like me try to use past data to express reasonable uncertainty about the size of various quantities in the model, known as parameters, and may also have doubts about what the structure of the model might be.

Take a wonderfully trivial example. This very newspaper reported on February 2nd that someone had found that all 6 eggs in a box had double yolks, and that this had a '1-in-a-trillion chance'. The calculation behind this was explained to a struggling John Humphrys on Today – the man from the Egg Council said that 1 in 1,000 eggs has a double yolk, and so the chance of all 6 in a box being double-yolkers was 1 in 1,000 x 1,000 x 1,000 x 1,000 x 1,000 x 1,000.
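
That calculation takes two lines of Python – a minimal sketch, using only the quoted 1-in-1,000 figure:

    # The Egg Council's calculation: each egg independently has a
    # 1-in-1,000 chance of a double yolk, so for all six eggs in the
    # box, multiply the odds together.
    one_in = 1000 ** 6
    print(f"1 in {one_in:,}")   # 1 in 1,000,000,000,000,000,000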

This aroused my suspicion. First of all, this is not a trillion: 1,000 multiplied by itself six times is 10^18, a billion billion, a million times bigger than a trillion. Second, if this really were the true chance then, even though the UK consumes an artery-clogging 11 billion eggs a year, we would expect to wait around 500,000,000 years before this remarkable event occurred. So what has gone wrong with the model? It turns out that egg-world may not be so simple. For one thing, double-yolkers are far more common in certain flocks, so there should be uncertainty about the '1 in 1,000' parameter. For another, eggs in a box tend to come from the same source, so once one double-yolker is found it becomes more likely that the rest will match. So there is uncertainty about the structure of the model too.
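
Both points are easy to check numerically. In the sketch below, the 11-billion-egg figure comes from the article; the particular flock mix is invented purely for illustration, chosen so the overall average stays near 1 in 1,000.

    # Expected wait under the independence model.
    eggs_per_year = 11_000_000_000
    boxes_per_year = eggs_per_year / 6
    p_box = 1 / 1000 ** 6
    print(f"{1 / (p_box * boxes_per_year):,.0f} years")
    # ~545 million years (the article rounds to 500 million)

    # Now let the rate vary by flock: say 1% of flocks run at 1 in 20
    # and the rest at 1 in 2,000 – an invented mix with the same average.
    # Boxes come from a single flock, so the six eggs are no longer
    # independent.
    flocks = [(0.99, 1 / 2000), (0.01, 1 / 20)]
    print(sum(w * p for w, p in flocks))           # ~0.001, same headline rate
    p_box_mixed = sum(w * p ** 6 for w, p in flocks)
    print(f"1 in {1 / p_box_mixed:,.0f}")          # ~1 in 6.4 billion

With clustering, the same headline rate gives a box-level chance about a hundred million times larger than the independence model – enough for an all-double-yolk box to turn up every few years rather than once in 500 million.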

Acknowledgement of parameter and structural uncertainty has become common in climate and other models. But there is a further level of uncertainty: unforeseen surprises, Black Swans and Rumsfeldian unknown unknowns. There should always be a suspicion that there's more going on than we can express in mathematics. I am talking about this today at a Royal Society meeting on Uncertainty in Science, and will proudly reveal that I simply went to Waitrose and bought a box marked 'double-yolked eggs'. £2.49, and certainly not a 1-in-a-trillion chance. Until tipped off, it had never crossed my mind that double-yolked eggs can be common, and can be detected, selected and packed up at will.

The moral of ‘egg-gate’ is a famous quote from a statistician, George Box: ‘all models are wrong, but some are useful’. Models are not the truth – they are more like guide books, helpful but possibly flawed due to what we don’t know. Owning up to such ignorance is finally getting its due attention, although back in 1937 John Maynard Keynes, when talking about predictions of what would be happening in 1970, wrote “about these matters there is no scientific basis on which to form any calculable probability whatsoever. We simply do not know.”

So what is a scientist to do when they aren't certain and there's a lot they don't know? Just shrug and give a teenage grunt? A Chief Scientific Adviser adopting this approach is unlikely to stay in the job for long. But there are ways of showing doubt. The Monetary Policy Committee of the Bank of England makes projections for inflation and GDP growth, presenting a nice visual spread of possibilities as a 'fan chart' but reserving a 10% chance of falling outside that range, with a huge white void on the chart where anything might happen. And it did: the projections made in 2007 were wildly wrong. Maybe this unmapped region should be labelled "here be dragons".
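
A fan chart is straightforward to construct once you have a spread of simulated futures: keep the central 90% of outcomes at each horizon and leave the remaining 10% to the void. A minimal sketch follows; the random-walk projections are invented for illustration and are nothing like the Bank's actual model.

    import random

    # Invented stand-in for a projection model: a random walk.
    def simulate_path(horizon=8, start=2.0, vol=0.5):
        path, x = [], start
        for _ in range(horizon):
            x += random.gauss(0, vol)
            path.append(x)
        return path

    paths = [simulate_path() for _ in range(10_000)]

    # Central 90% band at each horizon; the other 10% of outcomes
    # fall in the white void outside the fan.
    for t in range(8):
        outcomes = sorted(p[t] for p in paths)
        lo = outcomes[int(0.05 * len(outcomes))]
        hi = outcomes[int(0.95 * len(outcomes))]
        print(f"quarter {t + 1}: 90% band [{lo:.2f}, {hi:.2f}]")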

Another approach is 'better safe than sorry'. In July 2009 the Department of Health made a 'worst case scenario' planning assumption of 65,000 swine flu deaths by assuming every unknown quantity in the model took its worst possible value. There have been 457 deaths to date. Such a super-precautionary approach is expensive in resources, does little for scientific reputations, and may blunt the response to a genuinely serious pandemic.
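
The mechanics of such a worst-case calculation are simple: take each uncertain parameter at its most pessimistic plausible value and multiply through. The parameter ranges below are invented for illustration – only the 65,000 headline comes from the article – but they show how quickly stacked pessimism compounds.

    # Each parameter as (best guess, worst case); the specific ranges
    # here are illustrative, not the Department of Health's.
    population  = 61_000_000       # UK, roughly, in 2009
    attack_rate = (0.05, 0.30)     # fraction of population infected
    fatality    = (0.001, 0.0035)  # deaths per case

    best  = population * attack_rate[0] * fatality[0]
    worst = population * attack_rate[1] * fatality[1]
    print(f"best guess: {best:,.0f} deaths")   # ~3,000
    print(f"worst case: {worst:,.0f} deaths")  # ~64,000 – the 65,000 headline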

In 2007 the Intergovernmental Panel on Climate Change said it had 'very high confidence' that man has caused global warming, which it interpreted as at least a 9 out of 10 chance of being correct. The panel must therefore accept up to a 1 in 10 chance of being wrong. This seems a fair and open judgement, but it has been generally ignored in the increasingly polarised arguments.

It would be nice to think that scientists could be upfront about uncertainty, showing due humility and not feeling they have to put everything into precise numbers. Robust decisions can still be made. Acknowledging uncertainty may even, apparently paradoxically, increase public confidence in pronouncements. Recent events, whether the justification for the Iraq War or the climate disputes, have again shown that trust is the crucial factor in gaining public support. But trust may be even more difficult to achieve than certainty.