How can 2% become 20%?


The Daily Mail headline below is unequivocal – statins cause a 20% increase in muscle problems.

[Image: Daily Mail headline reporting that statins increase the risk of muscle problems by 20%]

Unfortunately, the ‘20%’ is factually incorrect – the study on which this story is based claims that taking statins increased the risk of muscle problems from 85% to 87%. And even that claim is highly dubious. How can the Daily Mail get it so wrong?

The ‘20%’ is a basic statistical error promoted by a misleading abstract and press release from JAMA Internal Medicine – associated with the Journal of the American Medical Association, a (supposedly) reputable source. The authors estimated an ‘odds ratio’ of 1.19 for musculoskeletal problems, which the Daily Mail interpreted as a 20% increased risk.

I’m afraid we need to get a bit technical now. An odds ratio is a standard measure that statisticians and epidemiologists (yes, them again) use to quantify the association between an exposure (here statins) and an event (muscle problems). It is defined as the odds of the event given the exposure, divided by the odds without the exposure. The crucial thing is the use of odds, not risk, where the odds of an event is its probability divided by the probability of it not occurring (why statisticians should use this bizarre measure is another story – see for example this Wikipedia description).
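To make the definition concrete, here is a minimal Python sketch of the odds and odds-ratio calculations just described (the function names are my own illustration, not anything from the paper):

```python
def odds(risk):
    """Convert a probability (risk) of an event into odds."""
    return risk / (1 - risk)

def odds_ratio(risk_exposed, risk_unexposed):
    """Odds of the event with the exposure, divided by the odds without it."""
    return odds(risk_exposed) / odds(risk_unexposed)

# For rare events, odds and risk are nearly equal, so an odds ratio can be
# read loosely as a relative risk. For common events they diverge badly.
print(odds(0.01))  # 0.0101... -- almost the same as the risk
print(odds(0.85))  # 5.666...  -- nothing like the risk
```

The point is that for rare events an odds ratio is close to the risk ratio, so the two are often conflated; at an 85% baseline, as here, they are nothing alike.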

Table 4 of the paper (not reported in the abstract) gives risks with and without statins of 87% vs 85%, which translate to odds of 0.87/0.13 = 6.7 and 0.85/0.15 = 5.7. The odds ratio is therefore 6.7/5.7 = 1.18 (their figure of 1.19 involved some adjustment for other factors). Alternatively, the risk ratio was 0.87/0.85 = 1.02, a 2% relative increase, while the difference in absolute risks was 0.87 – 0.85 = 0.02, just 2 percentage points. The Code of Practice for the British Pharmaceutical Industry has banned the reporting of relative risk without also giving the change in absolute risk. Why this is still considered acceptable within epidemiological papers is beyond me.
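As a quick check, this short Python snippet (again my own sketch, using only the two risks quoted above from Table 4) reproduces all three summaries:

```python
# Risks of musculoskeletal problems from Table 4: 87% with statins, 85% without.
risk_statin, risk_no_statin = 0.87, 0.85

odds_statin    = risk_statin / (1 - risk_statin)        # 0.87/0.13 ≈ 6.7
odds_no_statin = risk_no_statin / (1 - risk_no_statin)  # 0.85/0.15 ≈ 5.7

odds_ratio      = odds_statin / odds_no_statin  # ≈ 1.18 (1.19 after adjustment)
risk_ratio      = risk_statin / risk_no_statin  # ≈ 1.02: a 2% relative increase
risk_difference = risk_statin - risk_no_statin  # ≈ 0.02: 2 percentage points

print(f"odds ratio      = {odds_ratio:.2f}")
print(f"risk ratio      = {risk_ratio:.2f}")
print(f"risk difference = {risk_difference:.2f}")
```

The ‘20%’ headline comes from reading the odds ratio of 1.19 as if it were the risk ratio of 1.02 – a tenfold exaggeration of the relative change.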

And such a tiny difference, in a very common problem, could be due to all sorts of confounding factors that were not allowed for. In particular, people on statins are likely to visit their doctor more often, and the doctor may then investigate other symptoms, as the authors admit in their discussion. So they have not shown that this difference was due to statins.

It is difficult to know who is most to blame here – the authors for producing a misleading abstract without the key information, JAMA Internal Medicine, or the Daily Mail. Personally, I feel that JAMA Internal Medicine is most responsible, for not properly refereeing the paper, and producing a press release that invited misunderstanding and distortion.

Comments

I would never use ORs in a press release – it opens the door to misinterpretation. A press release goes to such a wide variety of journalists that you cannot expect them all to know the difference between an RR and an OR. Journalists and the public are used to thinking in terms of relative risk. However, if a press officer is going to avoid using an OR, that requires working with the author to substitute something acceptable (real numbers, for example, which may not be easily available). As JAMA has a big, well-organised press office, they should have known better, but perhaps the authors also need to intervene if they see scope for misinterpretation.