Handling uncertainty about climate change

As of 23 May 2022 this website is archived and will receive no further updates.

This website was produced by the Winton Programme for the Public Understanding of Risk, based in the Statistical Laboratory at the University of Cambridge. The aim was to help improve the way that uncertainty and risk are discussed in society, and to show how probability and statistics can be both useful and entertaining.

Many of the animations were produced using Flash and will no longer work.

The InterAcademy Council (IAC) recently produced a report on the workings of the Intergovernmental Panel on Climate Change (IPCC) which received a lot of publicity, but almost no coverage was given to what the IAC said about the IPCC's way of handling uncertainty. It makes interesting reading.

In preparation for the Fourth Assessment Report (AR4) of the IPCC in 2007, an attempt was made to standardise the expression of uncertainty across the three Working Groups. Lead authors were advised to consider plausible sources of uncertainty and assess the current level of understanding of key issues using the qualitative scale shown in Table 1.

Table 1. Qualitatively defined levels of understanding recommended for use by Working Groups of the IPCC. The two axes are level of agreement or consensus (rows) and amount of evidence, i.e. theory, observations and models (columns):

High agreement, limited evidence   |   High agreement, much evidence
Low agreement, limited evidence    |   Low agreement, much evidence

The guidance follows the work of Risbey and Kandlikar in recommending that the precision of any statement about an unknown quantity should depend on the quality of the available evidence, and that numerical probability statements should only be made about well-defined events and only when there is ‘high agreement, much evidence’. They distinguish between ‘likelihood’, defined as a ‘probabilistic assessment of some well defined outcome having occurred or occurring in the future’, and ‘levels of confidence’, ‘based on expert judgement as to the correctness of a model, an analysis or a statement’. Tables 2 and 3 provide a mapping between linguistic terms and numerical values for likelihood and confidence.

Table 2: Likelihood scale recommended for use by Working Groups of the IPCC.
Terminology              Likelihood of the occurrence/outcome
Virtually certain        > 99% probability of occurrence
Very likely              > 90% probability
Likely                   > 66% probability
About as likely as not   33% to 66% probability
Very unlikely            < 10% probability
Exceptionally unlikely   < 1% probability
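Calibrated language like this can be treated as a lookup table. A minimal Python sketch: the intervals follow the AR4 likelihood scale, while the helper function and its name are my own illustration, not part of the IPCC guidance.

```python
# The IPCC AR4 likelihood scale (Table 2), expressed as (lower, upper)
# bounds on the probability of a well-defined outcome. Only the terms
# listed in the table above are included.
LIKELIHOOD_SCALE = {
    "virtually certain":      (0.99, 1.00),
    "very likely":            (0.90, 1.00),
    "likely":                 (0.66, 1.00),
    "about as likely as not": (0.33, 0.66),
    "very unlikely":          (0.00, 0.10),
    "exceptionally unlikely": (0.00, 0.01),
}

def likelihood_term(p):
    """Most specific (narrowest) calibrated term whose interval contains p."""
    matches = [(hi - lo, term)
               for term, (lo, hi) in LIKELIHOOD_SCALE.items()
               if lo <= p <= hi]
    return min(matches)[1] if matches else None
```

So a probability of 0.95 maps to ‘very likely’ rather than the broader ‘likely’, and a probability of 0.2 maps to no term at all, since the listed scale leaves a gap between 10% and 33%.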

Table 3: Quantitatively calibrated levels of confidence recommended for use by Working Groups of the IPCC.
Terminology            Degree of confidence in being correct
Very high confidence   At least 9 out of 10 chance of being correct
High confidence        About 8 out of 10 chance
Medium confidence      About 5 out of 10 chance
Low confidence         About 2 out of 10 chance
Very low confidence    Less than 1 out of 10 chance

However, as reported in the critique of the IPCC by the InterAcademy Council, the Working Groups in the Fourth Assessment Report were not consistent in their use of this guidance. Working Group I (The Physical Science Basis) made extensive use of formal models, which allowed between-model variability in projections of key quantities to be represented, as well as overall uncertainties to be expressed as probability distributions and confidence intervals. Overall conclusions were qualified with a mixture of likelihood and confidence statements, and the choice between them sometimes appears somewhat arbitrary: compare “... very high confidence that the global average net effect of human activities since 1750 has been one of warming” with “Most of the observed increase in global average temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”. Working Group II (Impacts, Adaptation and Vulnerability) primarily used the confidence scale (Table 3), but the IAC report criticised the use of this numerical scale for conclusions that were vaguely worded or based on weak evidence. Working Group III (Mitigation of Climate Change) used only the level-of-understanding scale (Table 1). All Working Groups conditioned on a list of emission scenarios that were not themselves given probabilities.

The IAC concludes that the IPCC Uncertainty Guidance was a good starting point, but that unnecessary errors arose from its inconsistent use, such as the expression of ‘high confidence in statements for which there is little evidence, such as the widely-quoted statement that agricultural yields in Africa might decline by up to 50 percent by 2020’ (IAC) - which is in any case a fairly vacuous statement. They recommend that future Working Groups should use the level-of-understanding scale (Table 1), supplemented by quantitative probabilities (Table 2) where there is sufficient evidence; that traceable accounts should be provided for expressions of scientific understanding and likelihoods; and that the numerical confidence scale (Table 3) should be abandoned.

Personally, I think the IAC's comments are very appropriate. As argued in the excellent report by Morgan and colleagues on handling uncertainty in climate change, numerical probabilities should be reserved for things that are at least potentially determinable in the future.

I also agree with the IAC that a qualitative assessment of the strength of the underlying science is valuable, but I am not convinced by the scale suggested in Table 1. The world of evidence-based medicine has long wrestled with the problem of distinguishing a numerical estimate of the average effect of a treatment in a population from the quality of the evidence underlying that estimate - for example, a large but poorly conducted analysis of a database may end up with a tight confidence interval but still not be convincing. The GRADE scale shown in Table 4 is widely used by the Cochrane Collaboration and others as a measure of the quality of the underlying evidence, and provides pragmatic definitions in terms of the possibility of further change.

Table 4: GRADE scale used by the Cochrane Collaboration and others.
Quality of evidence Definitions
High quality Further research is very unlikely to change our confidence in the estimate of effect
Moderate quality Further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate
Low quality Further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate
Very low quality Any estimate of effect is very uncertain

Maybe the IPCC, and other people doing risk assessments on the basis of incomplete information and inadequate knowledge, could learn something from evidence-based medicine?



It would be very useful if the UK Government similarly acknowledged uncertainty in its guidance on preparing Community Risk Registers (CRRs). Under the Civil Contingencies Act (2004), Local Resilience Forums (consisting of Local Authorities and the Emergency Services) have to produce a CRR for their area, giving the likelihood and impact of events ranging from extreme weather to collapsing buildings and pandemics. Both likelihood and impact are rated on a five-point scale, with the lowest likelihood rating being a "1 in 20,000 chance over 5 years." Can people really estimate probabilities of these magnitudes without any uncertainty?
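One way to see what that lowest rating implies: under the (strong, purely illustrative) assumption that the risk is constant and independent from year to year, a 1 in 20,000 chance over 5 years works out to roughly a 1 in 100,000 chance in any single year.

```python
# Convert a stated 5-year probability into an implied annual probability,
# assuming a constant, independent risk in each year. That independence
# assumption is mine for illustration, not part of the CRR guidance.
p_5year = 1 / 20_000

# Over 5 years, P(at least one event) = 1 - (1 - p_annual)**5.
# Solving for p_annual:
p_annual = 1 - (1 - p_5year) ** (1 / 5)

print(f"{p_annual:.2e}")  # roughly 1e-05, i.e. about 1 in 100,000 per year
```

For probabilities this small the five-year chance is almost exactly five times the annual one, which only sharpens the question above: claiming to distinguish such tiny probabilities on a point scale, with no expression of uncertainty, is a big ask.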