"Official": it does matter how risk statistics are presented
The respected and influential Cochrane Collaboration has just published a systematic review of research on different ways of presenting risks and reductions in risk in a health context. It won't come as a surprise to regular readers of this website that they concluded that some aspects of the presentation really do make a difference. But maybe there are a few surprises in their detailed findings.
The Cochrane Collaboration is an independent international organisation that prepares and publishes systematic reviews - Cochrane Reviews - of the effects of interventions of many kinds on health outcomes. These reviews are published in the Cochrane Library, which is available free on the Web in the UK and many other countries. Cochrane Reviews, despite sometimes provoking controversy, are widely seen as the gold standard for putting together research on an intervention into a useable evidence-based form.
Most Cochrane Reviews deal with what you might think of as the standard type of medical treatment or intervention. For instance, the latest batch to be published includes a review comparing two drugs (artesunate and quinine) in the treatment of severe malaria, and another investigating the effect of zinc on the symptoms of the common cold. But there is also a review of something much closer to the heart of this website. It's entitled "Using alternative statistical formats for presenting risks and risk reductions" and was produced by a team from Canada, the USA, Norway and Italy.
As with all Cochrane Reviews, this one contains a "plain language summary" of its findings, which makes a pretty good job of explaining what they found, in brief. You might be happy reading just that summary - if not, read on, and I'll give you my take on it.
The review put together evidence from published research on four specific questions.
First, do people understand statements about risk better when they are expressed in terms of probabilities, or in terms of actual numbers of people ("natural frequencies")? For instance, is it better in terms of understanding to say that a drug has a 5% probability of leading to a particular side effect, or to say something like "Out of 20 people like you taking this drug, we'd expect one to have this side effect"?
Here, the review found pretty clear evidence that natural frequencies are better understood by members of the public. We've reported this kind of finding on the website before, so it doesn't come as a surprise.
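The probability-to-frequency conversion in the example above ("a 5% probability" becomes "1 out of 20 people") is just arithmetic. Here's a minimal sketch of that conversion in Python; the choice of a maximum denominator of 100 is my own assumption, since real decision aids usually round to a "friendly" denominator like 20, 100 or 1000.

```python
# Convert a probability into a natural-frequency statement
# ("out of N people, we'd expect k to be affected").
from fractions import Fraction

probability = 0.05  # the 5% side-effect probability from the example

# Find the closest simple fraction with a denominator of at most 100.
freq = Fraction(probability).limit_denominator(100)

print(f"Out of {freq.denominator} people like you taking this drug, "
      f"we'd expect {freq.numerator} to have this side effect")
```

For 5% this recovers exactly the "1 out of 20" phrasing used above; for messier probabilities the `limit_denominator` step trades a little accuracy for a rounder, more interpretable denominator.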
The other specific questions that they looked at were all to do with how to present the changes in risk, particularly reductions in risk, due to some change in behaviour, treatment, or other intervention. Keen followers of UU will know all about the issues from our animated article on 2845 ways to spin the Risk. That article looks at different ways of presenting the effect of eating bacon sandwiches on bowel cancer risk, and the benefits of taking statins for 10 years in reducing the risk of a heart attack or stroke. I'll use the statins example here, because it's about a reduction in risk and thus matches what the reviewers did. The Cochrane Review compared the reporting of relative risk reduction (RRR) and of absolute risk reduction (ARR). In the statins example, putting it in terms of relative risk reduction would involve saying something like "Statins reduce your chance of experiencing a heart attack or stroke in 10 years by 20%." In absolute risk reduction terms, you'd say instead that "Your chance of experiencing a heart attack or stroke in 10 years without statins is 10%, which is reduced to 8% with statins."
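The two statin statements above describe exactly the same numbers, just framed differently. A quick sketch of the arithmetic connecting them (the 10% and 8% figures are the article's illustrative example, not data from any particular trial):

```python
# Check that the RRR and ARR framings of the statins example agree.
baseline_risk = 0.10  # 10-year risk of heart attack or stroke without statins
treated_risk = 0.08   # the same risk with statins

# Absolute risk reduction: the simple difference in risks.
arr = baseline_risk - treated_risk

# Relative risk reduction: the difference as a fraction of the baseline risk.
rrr = arr / baseline_risk

print(f"ARR: {arr:.0%} (risk falls from {baseline_risk:.0%} to {treated_risk:.0%})")
print(f"RRR: {rrr:.0%}")
```

The same 2-percentage-point drop yields a 20% relative reduction, which is part of why the relative framing can sound so much more dramatic.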
The Cochrane reviewers compared RRR and ARR in three ways. First, they looked at how well people understood them. They found no evidence of a difference in understanding. I have to admit this finding surprised me (but if systematic reviews never surprised anyone, there wouldn't be much point in doing them). It was based on only three published studies, so perhaps further research might make things look different (or perhaps it might not).
Next, they investigated how people perceived the risk reduction. There was some evidence that people perceived the risk reductions as greater when they were presented in relative rather than in absolute terms. This one doesn't surprise me. "Statins reduce your chances of a heart attack or stroke by 20%" somehow sounds more dramatic to me than saying that the chance falls from 10% to 8%, and I'm a statistician who shouldn't be swayed by such details.
Last, the reviewers looked at how persuasive the presentation of risk reduction was. For instance, would people be more likely to decide to take statins if the heart attack risk reduction were presented in relative or in absolute terms? They found moderately clear evidence that relative risk reductions were more likely to be persuasive.
I find this hardly surprising either, given the previous finding. If people think the reduction in risk is greater when it's presented as RRR, then it makes sense to me that RRR presentation is more likely to make people change what they do. But, as the reviewers themselves point out, maybe the RRR presentation has actually exaggerated the perceived size of the risk reduction, and thus maybe the resulting change in behaviour or treatment is not what people would really want. More on this below.
The other two specific questions considered in the review compare, first, the presentation of risk reduction in relative terms (RRR) with presenting it as "number needed to treat" (NNT), and second, absolute risk reduction (ARR) with NNT. In the statins example, a statement in NNT terms would be "In order to save one person from experiencing a heart attack or stroke in 10 years, we would need to treat 50 people like you with statins." As in their comparison of RRR with ARR, the reviewers made these comparisons in terms of understanding, perception and persuasiveness.
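Where does the "50 people" figure come from? The NNT is just the reciprocal of the absolute risk reduction, as this brief sketch shows (again using the article's illustrative 10% and 8% figures, not trial data):

```python
# Derive the number needed to treat (NNT) from the statins example.
baseline_risk = 0.10  # 10-year risk without statins
treated_risk = 0.08   # 10-year risk with statins

arr = baseline_risk - treated_risk  # absolute risk reduction: 0.02
nnt = round(1 / arr)                # NNT is the reciprocal of the ARR

print(f"Treat {nnt} people to prevent one heart attack or stroke")  # 50 people
```

Intuitively: if each person treated has their risk cut by 2 percentage points, then on average you need to treat 1/0.02 = 50 people before one of them is spared the event.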
The reviewers found moderate evidence that people understand both RRR and ARR better than they understand NNT. In perception, they conclude that people think the risk reduction is larger when it's presented either as RRR or as ARR, than when it's presented as NNT. Finally, on persuasiveness, there was some evidence that RRR is more persuasive than NNT, but no such evidence of a difference in persuasiveness for ARR and NNT. (You might want to explore how these findings fit your own views of the different ways of presenting risks, using our article on 2845 ways to spin the Risk.)
That's not the end of the story, however. In reporting their evidence that RRR leads to bigger perceptions of risk reduction than either ARR or NNT, and that it is more persuasive, the reviewers write "However, it is uncertain whether presenting RRR is likely to help people make decisions most consistent with their own values and, in fact, it could lead to misinterpretation. More research is needed to further explore this question." They spell out in some detail in their Conclusions section where they believe the research gaps are. I wouldn't disagree with any of this. Let's see where it leads...