Comparing hospitals

DJS, Times, 30th November 2009

Will you be safe in the hands of the St Helens and Knowsley Hospitals NHS Trust? Well, it depends on what you read. If you consult the latest Dr Foster Hospital Guide, apparently you will be in one of England’s most unsafe hospitals. But the website of the official NHS regulator, the Care Quality Commission (CQC), says your hospital is rated as “Excellent” for Quality of Services. I’m a statistician whose methods are used by both Dr Foster and the CQC, and I’m confused, so heaven help the poor patients in St Helens.

Hospitals are not football teams that can easily be ranked in a league table, and measuring safety is complex and open to manipulation. That great statistician, Florence Nightingale, returned from the Crimea 150 years ago and instituted the first comparative audit of deaths in London hospitals, but in 1863 she wrote resignedly “we have known incurable cases discharged from one hospital, to which the deaths ought to have been accounted, and received into another hospital, to die there in a day or two after admission, thereby lowering the mortality rate of the first at the expense of the second”.

But how in modern times can two organisations come up with such different conclusions? The CQC’s rating depends partly on meeting targets which, whether you like them or not, are at least fairly measurable, but the “Excellent” for St Helens also means compliance with ‘core standards’ set by the Department of Health. These include, for example, the eloquent safety standard C01b (take a deep breath) “Healthcare organisations protect patients through systems that ensure that patient safety notices, alerts and other communications concerning patient safety which require action are acted upon within required timescales”.

Three thoughts spring to mind. First, who writes this stuff? Second, this is a measure of organisational process, and we have no idea whether it will prevent any actual accidents. Third, hospitals self-assess their compliance with these standards, just like a tax self-assessment form. It’s then up to the CQC to cross-check the claim against relevant bits of a vast mass of routine data, including patient complaints; the 10% of trusts found to be most at risk of ‘undeclared non-compliance’ (fibbing, in plain language) then get inspected. A random selection of hospitals gets inspected as well, and those that are caught out get ‘fined’ rating points.
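
To make that screening idea concrete, here is a toy sketch of how such a risk-based selection might work. This is not the CQC’s actual algorithm: the risk scores, the 10% threshold and the function name are all invented for illustration.

```python
import random

def select_for_inspection(risk_scores: dict[str, float],
                          risk_fraction: float = 0.10,
                          n_random: int = 2,
                          seed: int = 0) -> set[str]:
    """Pick the trusts most at risk of 'undeclared non-compliance',
    plus a random sample of the rest.

    risk_scores maps trust name -> an invented risk score; nothing
    here reflects the CQC's real screening model.
    """
    ranked = sorted(risk_scores, key=risk_scores.get, reverse=True)
    n_risk = max(1, int(risk_fraction * len(ranked)))   # the riskiest 10%
    targeted = set(ranked[:n_risk])
    remainder = [t for t in ranked if t not in targeted]
    rng = random.Random(seed)                           # random spot-checks
    spot_checks = set(rng.sample(remainder, min(n_random, len(remainder))))
    return targeted | spot_checks

# Invented scores for five hypothetical trusts.
scores = {"Trust A": 0.91, "Trust B": 0.12, "Trust C": 0.55,
          "Trust D": 0.08, "Trust E": 0.33}
print(select_for_inspection(scores))  # Trust A targeted, plus 2 random picks
```

The point of the combination is the same as in tax screening: the targeted tier concentrates effort where the data look suspicious, while the random tier keeps every trust honest.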

It’s rather remarkable that this country has led the world in introducing an automated risk-based inspection for hospitals, similar to the way that the Inland Revenue screen tax self-assessments. But just as light-touch regulation of the financial world has for obvious reasons got itself a bad name, there is now likely to be a change in regime for hospitals.

In contrast to the CQC, Dr Foster don’t do inspections and use few measures of process – their ratings are mainly driven by statistics. In particular, 6 of the 13 safety indicators concern death rates, in which the observed numbers of deaths are compared to the number that would be expected given the type of patients being treated.

Simply counting the bodies at first seems the obvious way to measure hospital quality. Certainly some dramatic improvements in death rates have been reported from hospitals in the news: Mid-Staffordshire has gone from 27% excess mortality in 2007 to 8% fewer deaths than expected in 2008, while Basildon and Thurrock had a 31% excess in the year up to March 2009 but now claim to be average. Maybe these hospitals really have suddenly started saving a miraculous number of lives. But in-hospital standardised mortality rates might also be lowered, quite appropriately, by accurate use of the code ‘admitted for palliative care’ (which increases the expected number of deaths), and by the sensitive movement of some terminally-ill patients to die out of hospital. We do not have to be as sceptical as Nightingale to realise that death rates are more malleable than we might think, and are a very blunt instrument for measuring quality.
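
To pin down what figures like ‘27% excess mortality’ mean, here is a minimal sketch of the underlying arithmetic, a ratio of observed to expected deaths. The numbers and the function name are invented; real standardised mortality ratios involve detailed casemix adjustment, which is not attempted here.

```python
def standardised_mortality_ratio(observed: int, expected: float) -> float:
    """Observed deaths divided by the number expected for the casemix.

    A ratio of 1.0 means deaths are in line with expectation; 1.27 gets
    reported as '27% excess mortality'. Note that coding more patients
    as 'admitted for palliative care' raises `expected`, lowering the
    ratio without a single extra life being saved.
    """
    return observed / expected

# Invented figures, purely to show the arithmetic.
smr = standardised_mortality_ratio(observed=1270, expected=1000.0)
print(f"SMR = {smr:.2f} -> {(smr - 1) * 100:+.0f}% relative to expectation")
# prints: SMR = 1.27 -> +27% relative to expectation
```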

Dr Foster and the CQC essentially get different ratings because they choose different indicators to put into a summary formula. But does it make any sense to produce a single rating for a complex institution like a hospital? Just as with two football teams one point apart in the championship, the result can be swayed by trivial events: a few years ago my local Cambridge hospital dramatically dropped from 3 stars to 2 stars under the old star-rating system, and forensic analysis revealed this was due to just 4 too few junior doctors, out of 400, being signed up to the New Deal on working hours.

It’s clear that naming, blaming and shaming gets headlines, which produces urgency and attention in hospital boardrooms, and will have contributed to the little-reported 60% fall in both MRSA and C. difficile rates over the last 2 years. But trying to produce a single measure of ‘quality’ will inevitably lead to the sort of contradictions we saw last week.

Anyway, it’s all up in the air now. From next April each hospital will have to release its own ‘Quality Account’ reporting on local priorities for improvement – fine for local accountability, but someone also has to be making national comparisons and rapidly detecting safety lapses using current centralised information. Doubtless new inspection methods will be developed by the CQC: pre-announced formal inspections encourage as much careful preparation as royal visits, and so we might expect more roaming gangs of unannounced inspectors.

And the patients at St Helens need not worry: closer examination reveals that their low rating by Dr Foster is largely driven by some missing data on safety reporting. But nobody reading the headlines will have realised this. The CQC is no longer legally obliged to publish an overall rating, so let’s hope we can get away from over-simplistic and unjust league tables.

David Spiegelhalter is Winton Professor of the Public Understanding of Risk at the University of Cambridge. He has collaborated on statistical methods that are used by both the Care Quality Commission and Dr Foster Intelligence.
