Understanding Children's Heart Surgery Data

Everything else

 

Terms used in this section

  • Survival rate. The percentage of operations where the child survived at least 30 days after their operation.
  • Chance factors. It is impossible to predict precisely what is going to happen in an individual operation. This is partly due to the inherent impossibility of predicting the future with certainty: all people are physically unique and will react slightly differently to medicines, anaesthetic and surgery, and no heart problem is exactly the same as another. It is also partly because some factors that we think might influence the outcome of an operation cannot be included in the statistical method, either because they are difficult to define or because no routine data on them is collected. Together, we call all of these 'chance factors'.
 

1. Background to children's heart surgery results

1.1 Why do some children need heart surgery?

Each year in the UK, about 5000-6000 babies are born with a heart defect (called congenital heart disease). Congenital heart disease covers a wide range of problems, from the relatively minor (such as a small hole in the heart) to more severe conditions where a child needs specialist hospital care. About half of all children born with a heart defect will need heart surgery at some stage in their childhood. Children can also develop heart problems as they grow up (called acquired heart disease) that require hospital care.

Read more about different heart conditions and caring for children with heart conditions on the Children’s Heart Federation’s website.

We could do a chart or some sort of visualisation similar to David’s blogs on incidence of CHD (see screenshot below), number of children who will need surgery and number who develop acquired disease...?

1.2 Why are survival rates after children’s heart surgery monitored and published?

In the 1990s, problems were found with the standard of care for children having heart surgery at the Bristol Royal Infirmary. The proportion of children who died after surgery at Bristol was much higher than at other UK hospitals. There was a formal inquiry into what happened (The Bristol Inquiry, 2001), which led to a number of changes, including a new compulsory national reporting system. As a result, the percentage of children surviving to 30 days after surgery has been published for every hospital every year since 2001. If there is evidence that survival rates are lower than expected, they are checked further by the hospital and the national audit body (NICOR).

The UK now has one of the strongest monitoring programmes in the world. Since reporting started, survival rates have been improving and now over 97% of children survive to at least one month after surgery.

possible images

1.3 How are survival rates monitored?

Until 2013, the national audit body NICOR only published survival rates for certain types of procedure, because there was no clear way to put overall survival rates for each hospital into context (see “What, Why and How?”). Researchers have now made this possible by creating a statistical formula. Using this, NICOR has published overall survival rates along with the predicted range (dark blue bar) for survival for each hospital since 2013. The predicted range is the range in which we expect to see each hospital’s observed data; we expect it to be in this range the vast majority of the time (eg 19 times out of 20). The predicted range is calculated using the same statistical formula for all hospitals, and this prediction is not influenced by what the survival rate at a hospital actually was.

Each year, NICOR publishes a report of survival over the previous 3 years for each hospital in the UK and Ireland. It reports the percentage of children surviving for about 40 common surgical procedures and, since 2013, has been able to also include overall survival for each hospital along with that hospital’s predicted range for survival.

possible images

1.4 Where is the data from?

Each hospital must collect data on every surgery or intervention carried out on a child for heart problems. Every three months, hospitals must submit this data to the national audit body, NICOR (The National Institute for Cardiovascular Outcomes Research). NICOR sets out exactly what data must be collected and each hospital undergoes independent checks of the quality of their submitted data. NICOR also reports to the UK Department of Health, the Care Quality Commission (CQC) and other NHS regulatory bodies.

NICOR tracks the survival of these children by linking to the national register of deaths using patients’ NHS numbers, and also from hospital records. NICOR statisticians then analyse the data every year to enable hospitals and healthcare improvement bodies to monitor and improve the quality of care and outcomes for children who need heart surgery.

possible image – this is hospitals sending excel data to NICOR, ONS sending death data and then NICOR producing report

2. Understanding the predicted range

2.1 Why is a survival range predicted for each individual hospital?

Heart disease in children covers a wide range of disorders, from relatively minor to more serious conditions. This affects a child's chances of survival, as do other factors such as age, weight and other health problems.

Some hospitals specialise in certain conditions that are particularly complicated, meaning they tend to operate on children with a lower chance of survival. It would therefore be unfair to expect all hospitals to have the same survival rates each year. Circumstances also change from year to year, so we expect any hospital’s survival rate to vary a bit over time.

The predicted range (dark blue bar) is the range in which we expect a hospital’s observed survival rate to fall the vast majority of the time (eg 19 times out of 20).

It is based on a formula that uses recorded information about each operation a hospital performed over a 3 year period. Since each hospital operated on different patients, the predicted ranges for each hospital will be different.

This is why it doesn’t make sense to compare hospitals’ raw survival rates with each other. We compare a hospital’s observed survival rate only to its own predicted range, not to other hospitals’.

possible images - a snapshot from animation from factors going into the formula/machine and coming out with a predicted range?

2.2 Why do the predicted ranges for each hospital differ in width?

The predicted range for each hospital shows the range where we expect to see the observed data; we expect it to be in this range the vast majority of the time (eg 19 times out of 20), regardless of how many operations the hospital did or which children it treated. The position of the predicted range is generally determined by which children a hospital treated (i.e. it will be lower if a hospital treated a higher proportion of children with more complex medical problems). The width of the predicted range is influenced by the number of operations a hospital performed: hospitals that do more operations have a narrower predicted range and those that do fewer operations have a wider predicted range.

To see why this is the case, consider two example hospitals: Hospital A and Hospital B. Over the same time period Hospital A has done 4 operations and Hospital B has done 15. To avoid over-complicating the example, let’s assume that the mathematical formula predicted the same chance of survival for each child at each hospital: a 97% chance of survival. Because these are small numbers of operations, we can write down all possible scenarios for what could have happened at each hospital and how often we would expect each scenario to play out.

The possible scenarios for Hospital A are shown below.

Although the most likely scenario is that all children survive, the predicted range is set so that we expect a hospital’s observed survival rate (what actually happened) to be inside the predicted range at least 19 times out of 20. From the picture, we can see that the scenario with 4 survivors happens 18 times out of 20, so not often enough to be the only predicted outcome. We need to include the scenario of 3 survivors in the predicted range. The predicted range is therefore set to a 75% to 100% survival rate (corresponding to 3 to 4 survivors from 4 operations).

Let’s now do the same thing for the 15 operations at Hospital B.

This time we expect the scenario of 15 survivors to happen 13 times out of 20 and the scenario with 14 survivors to happen 6 times out of 20, so overall we expect either scenario to happen 19 times out of 20. Thus, for Hospital B, the predicted range is a 93% to 100% survival rate (corresponding to 14 to 15 survivors).
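If you would like to check these numbers yourself, here is a minimal sketch of the calculation in Python. It assumes, as the example does, that every child has the same 97% chance of survival, and it counts scenarios in rounded “times out of 20”, just as the text above does. The function names are our own, purely for illustration; the real method uses each child’s individual predicted chance of survival.

```python
# A toy version of the predicted-range calculation from the Hospital A/B
# example. Assumes every child has the same chance of survival; the real
# method uses each child's own predicted chance.
from math import comb

def scenario_probabilities(n_ops, p_survive=0.97):
    """Probability of each possible number of survivors from n_ops
    operations (a binomial distribution)."""
    return {k: comb(n_ops, k) * p_survive**k * (1 - p_survive)**(n_ops - k)
            for k in range(n_ops + 1)}

def predicted_range(n_ops, p_survive=0.97):
    """Add scenarios, counting down from 'all survive', until the rounded
    'times out of 20' add up to at least 19 out of 20."""
    probs = scenario_probabilities(n_ops, p_survive)
    times_out_of_20, k = 0, n_ops
    while times_out_of_20 < 19:
        times_out_of_20 += round(probs[k] * 20)
        k -= 1
    return (k + 1) / n_ops  # lowest survival rate inside the range

for name, n in [("Hospital A", 4), ("Hospital B", 15)]:
    print(f"{name}: predicted range {predicted_range(n):.0%} to 100% survival")
# Prints 75% to 100% for Hospital A and 93% to 100% for Hospital B,
# matching the example above.
```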

The two predicted ranges for the hospitals are shown below, and you can see that Hospital B has a much narrower predicted range. You could think of this feature of predicted ranges as reflecting the fact that the more operations a hospital does, the more information we have to judge the chances of survival at that hospital, and so the more precise our predictions can be.

NB for a real hospital, children will not all have the same chances of survival because each child has a unique combination of medical condition, age, weight and other health problems. This makes the calculation much more complex and researchers have been developing and improving this since ..x...

2.3 When looking at just one hospital, what does it mean if its observed survival rate is outside its predicted range?

This is a difficult question and so the answer is a bit long!

There are three steps that may lead to a hospital being outside its predicted range:

  • Step 1: each hospital and the Office for National Statistics supply data on each child to NICOR. Although the data submitted is usually of very high quality, there will always be some mistakes in large and complex datasets. If some of a hospital’s submitted data is badly wrong (for instance, wrong weights are recorded) or missing, this will result in a wrong predicted range.
  • Step 2: the statistical formula is then applied to all operations at that hospital to calculate its overall predicted range. Although the statistical formula is as good as we can currently get it, it is not perfect. There will always be unique features of a child that affect their chance of survival but are not captured by routine data collection, and so cannot be part of a formula. We will never be able to capture the whole medical picture of a child in a single formula! So the published predicted range is the best possible estimate of what the predicted range should be.
  • Step 3: “what actually happened” for each child is then used to calculate the observed survival rate for that hospital. If the hospital’s data contains no errors, and there is no reason to think that the formula shouldn’t apply well to that hospital, then an observed survival rate outside the predicted range is evidence that the chances of survival at that hospital are not as predicted. The strength of the evidence depends on where the observed survival rate lies. If it is outside the central predicted range (dark blue bar) but inside the extended range (light blue bar), within which we expect it to fall 998 times out of 1000, then this is considered moderate evidence. If the observed survival rate is outside the extended range (only expected to happen 2 times out of 1000), then this is considered strong evidence. A toy illustration of how the extended range can be calculated follows this list.
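To make the extended range concrete, here is an illustrative extension of the toy example from section 2.2, again with our own function name and the same simplifying assumption that every child has a 97% chance of survival. This time we accumulate exact scenario probabilities, counting down from “all survive”, until they cover at least 998 out of 1000 possible outcomes.

```python
# Toy calculation of the extended range (998 out of 1000 coverage) for the
# Hospital A/B example; assumes every child has the same chance of survival.
from math import comb

def lowest_in_extended_range(n_ops, p_survive=0.97, coverage=0.998):
    """Lowest number of survivors still inside a range covering at least
    `coverage` of all scenarios, counting down from 'all survive'."""
    total, k = 0.0, n_ops
    while total < coverage:
        total += comb(n_ops, k) * p_survive**k * (1 - p_survive)**(n_ops - k)
        k -= 1
    return k + 1

for name, n in [("Hospital A", 4), ("Hospital B", 15)]:
    k = lowest_in_extended_range(n)
    print(f"{name}: extended range {k/n:.0%} to 100% survival")
# With these toy numbers: 50% to 100% for Hospital A (2 or more survivors
# out of 4) and 80% to 100% for Hospital B (12 or more out of 15).
```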

2.4 When looking at all hospitals, what does it mean if any of the hospitals have an observed survival rate outside their predicted range?

If we were looking at only one hospital, we’d expect its observed survival rate to fall outside its predicted range rarely, only 1 in 20 times.

But if we are looking at all 14 hospitals at once, we’d actually expect at least one hospital to fall outside its range just by chance about 8 times in 20! This is similar to the difference between flipping one coin and flipping many: if I flip one coin, there is a 50% probability that I’ll get a head, whereas if I flip, say, four coins in a row, the probability of getting at least one head in the four throws goes up to 94%.

8 times in 20 means that it is not that rare for at least one of these hospitals to have an observed survival rate that falls outside its predicted range.

So, on average, we’d anticipate that about half (8/20) of NICOR’s annual reports will have at least one hospital outside its range, either above or below, by chance alone.

Consider now the “extended predicted range” (light blue bar). If we look at all 14 hospitals at once, we’d expect any of them to be outside their extended range only very rarely: less than 1 time in 20 (actually about 1 time in 30). This is why a hospital’s observed survival rate being outside the extended predicted range is considered strong evidence that the chances of a patient surviving at that hospital are different from what is expected.
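These “at least one hospital” figures are easy to approximate. Here is a back-of-envelope sketch which assumes, purely for illustration, that hospitals are independent and that each result falls inside its range with a fixed probability. Taking that probability to be exactly 19 out of 20 for the central range gives roughly 10 times in 20 rather than 8; the quoted figure is a little lower because, with whole numbers of survivors, each real predicted range typically covers somewhat more than 19 out of 20 possible outcomes.

```python
# Back-of-envelope check of the "at least one hospital outside" figures,
# assuming (purely for illustration) independent hospitals, each inside
# its range with a fixed probability.
def p_at_least_one_outside(n_hospitals, p_inside):
    """Chance that at least one of n_hospitals falls outside its range."""
    return 1 - p_inside ** n_hospitals

print(f"{p_at_least_one_outside(4, 0.5):.0%}")    # four coins, at least one head: 94%
print(f"{p_at_least_one_outside(14, 0.95):.0%}")  # central ranges at exactly 19/20: ~51%
print(f"{p_at_least_one_outside(14, 0.998):.1%}") # extended ranges at 998/1000: ~2.8%, rare
```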

2.5 What happens if a hospital’s observed survival rate is outside its predicted range?

For these cases, the NHS and the national audit body, NICOR, want to understand if there is a reason why a hospital is outside of its range.

Because NICOR always looks at all 14 hospitals at once, it is not that rare for any single hospital to be outside its main predicted range, but it is rare for any hospital to be outside its extended range (see also 2.4).

If a hospital’s survival rate is below its predicted range (either the main or extended), everyone wants to be sure that there is not a problem in the pathway of care. It is important either to rule this out or, if the national audit body decides this is the reason, to start improving care.

If a hospital’s survival rate is below the predicted range, the National Congenital Heart Disease Audit Steering Committee is notified. The Committee in turn notifies the Medical Director and the lead doctor for children’s heart surgery at that hospital, and a detailed examination of the hospital’s data takes place (see also 2.3). There are established and published procedures involving the Royal College of Surgeons and/or the Care Quality Commission which can be put into action if the detailed assessment raises concerns about care.

There are three main steps (see also 2.3):

  • Step 1. The hospital is asked to recheck the data it submitted for any errors.
  • Step 2. If corrected data still leads to the hospital being outside its range, analysts check to see whether the hospital treated some children that are unlikely to have had their survival chances accurately predicted by the formula. For instance, if the hospital treated children with particularly complex health problems that are not captured by the formula.
  • Step 3. If the risk adjustment is considered adequate, then the hospital’s process of care is examined. For instance, how are care decisions made? What are the surgical protocols? How is intensive care managed?

The report on individual instances like this would then be published online by NICOR at the same time as the Annual Report.

If a hospital’s survival rate is above its predicted range, we want to see if there is anything we can learn about best practice from that hospital so that it can be shared with other hospitals.

2.6 What is the risk adjustment method used by the national audit?

The national audit body, NICOR, uses a risk adjustment method called PRAiS (Partial Risk Adjustment in Surgery), developed by researchers at Great Ormond Street Hospital and University College London (see also the “What, Why, How” section). The underlying methodology is published in the academic literature if you are interested in learning more.
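Purely to illustrate the general idea of risk adjustment (and not the actual PRAiS model, whose risk factors and coefficients are specified in the published literature), here is a sketch of how a formula of this broad kind can turn recorded facts about an operation into a predicted chance of survival. The risk factors and numbers below are made up.

```python
# Illustrative only: NOT the actual PRAiS formula, risk factors or
# coefficients. A risk-adjustment formula turns recorded facts about an
# operation into a predicted chance of 30-day survival; adding up those
# chances over all of a hospital's operations gives the survival the
# formula predicts for that hospital.
import math

# Hypothetical coefficients for hypothetical risk factors.
INTERCEPT = 3.5
COEFFS = {
    "age_under_1_month": -0.8,
    "low_weight": -0.6,
    "complex_procedure": -1.1,
}

def predicted_survival_chance(risk_factors):
    """Logistic-style formula: chance = 1 / (1 + exp(-score))."""
    score = INTERCEPT + sum(COEFFS[f] for f in risk_factors)
    return 1 / (1 + math.exp(-score))

# A newborn with low weight having a complex procedure (made-up numbers):
print(f"{predicted_survival_chance(['age_under_1_month', 'low_weight', 'complex_procedure']):.1%}")
# An older, heavier child having a more routine procedure:
print(f"{predicted_survival_chance([]):.1%}")
```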

picture of formula churning away at PRAIS risk factors?

3. Limitations of these results and the data

cannot think of any image ideas about this!!

3.1 Are there any limitations to risk adjustment?

Yes, there are. Risk adjustment allows for a fairer assessment of a hospital’s observed survival rate by putting it into context (see also the “What, Why, How” section), but it still cannot make the comparison completely fair. It is always ‘partial’, as there will always be important risk factors that are not routinely collected and so cannot be captured by risk adjustment methods (see also 2.3). Additionally, any statistical formula has to be developed on existing data, so the data will typically be at least a year out of date. This means risk adjustment cannot adjust for future changes to the way data is collected (for instance, more complete data) or new methods of surgical or medical management. These statistical formulas are therefore updated every few years using more up-to-date data (in 2016, we updated PRAiS for the third time).

3.2 How reliable are the data?

The data come from the National Institute for Cardiovascular Outcomes Research (UCL NICOR) which collects national data for the National Heart Disease Audits. All hospitals performing heart surgery in children must submit their data in a standard format to NICOR. All hospitals are independently audited each year to check the quality of the data submitted.

So, the data are of high quality. While no large dataset is perfect (it is inevitable that a few records will not be 100% accurate), this dataset is among the most detailed and complete in the world for children’s heart surgery.

3.3 What are the limitations of the data?

Apart from occasional inaccuracies in the data, there are other limits to what the data can tell us about surgery outcomes. Some risk factors are not routinely collected (for instance, the size of a hole in the heart), which means they cannot be accounted for in our statistical formula.

These data are also snapshots in time of what happened at each specialist hospital. A number of particularly challenging patients one year (in ways not accounted for in our prediction) or a run of chance factors could cause a very good hospital to have worse outcomes than predicted. So we need to be careful about reading too much into any single time period.

3.4 What about longer term survival and quality of life?

National audit data at the moment (as of 2016) only looks at what happens shortly after surgery. These data cannot tell us about longer term (e.g. 90 day, 1 year or 5 year) survival, or other outcomes such as post-surgery complication rates or the impact of surgery on the child or their family. There is a lot of active research going on right now (due to finish around 2018) investigating how to capture, interpret and publish longer term survival and complication rates, so hopefully this information will be available in the next 5 years.

The data also can’t tell us how or why a hospital achieved the recorded results, so it cannot, by itself, tell us whether one hospital offers better or worse quality care than any other. These data cannot tell you what the results are likely to be next year. Nor can they tell us anything about what happens to children who never get operated on, for whatever reason, since data on these children is not currently submitted to the national audit.

4. My family or child

4.1 Which hospital should I go to?

You can use the national audit data to see how the different hospitals’ observed survival rate compares to their predicted range for a particular time period. You can also use the national audit website to explore how many operations of each type a hospital does and survival outcomes for each of these. However, this cannot, in itself, tell you which hospital you should go to and does not provide proof that one hospital is “better” than any other. The safety or otherwise of a hospital cannot be determined from survival data alone.

When considering which hospital to go to, there are many factors to take into account, including how well the clinical team know your child and his or her medical history, any particular medical issues that your child has (for instance, some hospitals specialise in treating children with a particular problem) and how far the hospital is from your home.

You should discuss your child’s care with their specialist cardiologist to determine what the best treatment option is for your child.

You can also access the support available from national charities such as the Children’s Heart Federation or Little Hearts Matter or local charities for your specialist children’s hospital (hospital map tab for individual hospital charities). These guides on speaking to your child’s surgeon or seeking a second opinion, written by the Children’s Heart Federation, might also be helpful.

4.2 Can the published data tell me about the risks for my child?

No, the published data should not be used like that – the risk for your child will depend on other factors that are not necessarily captured in the national data but that are known to the clinical team treating him or her. Your child’s specialist cardiologist and/or cardiac surgeon will be able to discuss this with you. These guides on speaking to your child’s surgeon or seeking a second opinion, written by the Children’s Heart Federation, might also be helpful.

5. Who developed this site and how

5.1 About us

(use logos for each picture) University College London: Dr Christina Pagel is a Reader in Operational Research (a branch of applied mathematics) at University College London, applying maths to problems in the NHS. She works very closely with doctors and other clinical staff, mainly at Great Ormond Street Hospital, to help them use routinely collected data to improve NHS services. Her work currently focuses on two areas: 1) care for children requiring heart surgery and 2) how specialist intensive care is organised for children who need it (for whatever reason).

Christina helped develop a statistical method called PRAiS to let specialist hospitals and the national audit body easily monitor survival outcomes after heart surgery in children. The UK national audit body that monitors paediatric congenital heart outcomes now uses this method in publishing its results. In this project, Christina worked with Sense about Science, the University of Cambridge and the Children’s Heart Federation to develop these online resources to help people interpret the audit body’s published results.

University of Cambridge: Professor David Spiegelhalter is a statistician from Cambridge University. He has worked for many years with doctors from Great Ormond Street Hospital on monitoring outcomes following surgery for congenital heart disease, and led the statistical team at the Bristol Royal Infirmary Inquiry. He is particularly interested in transparent communication, and was part of the team that drew up the new patient information leaflets for breast screening. Mike Pearson…

King’s College London: Dr Tim Rakow… Dr Emily Blackshaw….

Sense About Science, a U.K.-based charity, works to put science and evidence in the hands of the public. They are a source of information, challenge misinformation, and champion sound science and evidence with the help of scientists, academics, and experts in various fields. They facilitated the workshops with parents, other interested users and members of the public who helped to co-develop the website. www.senseaboutscience.org

The Children’s Heart Federation is the leading UK children’s heart charity and the main umbrella body for British CHD charities and voluntary organisations. They publicised this project among their members and coordinated the involvement of parents in our workshops.

5.2 How we developed this site

Yet to be written…

6. Further resources about understanding clinical data

Suggestions?