Screening for disease: why it's controversial

As of 23 May 2022 this website is archived and will receive no further updates. It was produced by the Winton Programme for the Public Understanding of Risk, based in the Statistical Laboratory at the University of Cambridge. The aim was to help improve the way that uncertainty and risk are discussed in society, and to show how probability and statistics can be both useful and entertaining.

Many of the animations were produced using Flash and will no longer work.

Screening for disease was in the news again in the UK last week. According to the BBC, a 20-year Swedish study of screening for prostate cancer showed that screening brought no benefit. (The actual study report didn't put it quite so baldly, but effectively did conclude there was no benefit.) This came just a couple of days after the Alzheimer's Disease Society asked that the NHS should offer checks for dementia to everyone (in the UK) when they reach the age of 75. Both these news items reported contrasting views on whether these screening checks are in fact advisable.

Why is that? You might think that it's surely better to know whether someone has a disease than not to know, and if some sort of screening or check can give this information, well, why not just do it?

The trouble is that it's nowhere near as straightforward as that, for several reasons. There are several articles about screening on this site, specifically on screening for dishonesty, for HIV and for breast cancer. They all demonstrate an important reason (in my view) why screening gets controversial: it involves uncertainty, and that uncertainty is often misunderstood. A good screening test may give the right result in most people who do have the disease in question, and also the right result in most people who don't have the disease in question. But there's often a problem. It can still happen that, if the test indicates you do have the disease, actually it's considerably more likely that you don't have it.

This isn't just some sort of fluke that arises rarely. It's the position with most disease screening. Good explanations (with excellent graphics) of why that's true are given on the Understanding Uncertainty pages on screening I've linked to above, so I won't repeat that. But, briefly, it arises because the people who "screen positive", i.e. who appear to have the disease according to the screening test, are of two kinds. Some of them really do have the disease, of course. The rest are so-called "false positives" - people who really don't have the disease, but got the wrong result from the screening test. In most screening situations, most of the people coming for screening don't have the disease. So most of those who screen positive are actually false positives.
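The point that rare diseases produce mostly false positives can be made precise with a small calculation. Here is a minimal sketch in Python (the function name and the illustrative accuracy and prevalence figures are my own choices, not from the studies discussed):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that someone with a positive screening result
    really has the disease."""
    true_pos = prevalence * sensitivity              # diseased, test positive
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy, test positive
    return true_pos / (true_pos + false_pos)

# Even a test that gives the right answer 90% of the time in both
# groups yields mostly false positives when the disease is rare:
for p in (0.5, 0.1, 0.04, 0.01):
    ppv = positive_predictive_value(p, 0.9, 0.9)
    print(f"prevalence {p:>4.0%}: P(disease | positive) = {ppv:.0%}")
```

As the loop shows, with the same test accuracy the chance that a positive result is real falls from 90% when half the screened population has the disease to under 10% when only 1 in 100 does.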

This misunderstanding is one reason why screening tends to be controversial, but it's not the only one, in my view. Another is that screening involves trade-offs between people's values, and we don't all hold the same values. Would you want to know that you had a serious disease, even if there's no effective treatment for it? Well, maybe you would, maybe you wouldn't. We're all different. Would you want to take a screening test where you might be a false positive, and have to have all kinds of further investigations and even, possibly, potentially harmful treatments for a disease that you haven't actually got? Again, we don't all feel the same about that sort of thing. You might take the view that one is better safe than sorry. Or you might think that all the worry and potential harm isn't worth it for a disease that, more likely than not, you haven't actually got. Since we don't all agree, we won't all hold identical views on whether a screening programme is a good thing, even if we do understand the uncertainty.

To make this all just a bit more concrete, let's look further at the proposal to test everyone for dementia at age 75. Recent reviews are pretty complimentary about the accuracy of some (though not all) of the tests that might be used. (Links to abstracts of two relevant papers are here or here, if you want more details.)

To make the numbers easier, suppose that a particular test gives a positive result in 90% of patients with dementia, and a negative result in 90% of patients who don't have dementia. The Alzheimer's Society estimates that, of people aged between 70 and 79 in the UK, about 1 person in 25 has dementia. So imagine that 1000 people aged 75 turn up for testing for dementia. About 40 of them will actually have dementia, and 90% of those, that's 36, will have a positive test result, that is, the test will indicate that they have dementia. But the other 960 who turn up for testing won't have dementia. Of these, 90% will have a negative test: that's 864 of them. The rest of the 960, which is 96 people, will have a positive test result even though they don't have dementia - they are false positives.

So altogether there are 36 + 96 = 132 positive test results, but only 36 of these are true positives, people who really do have dementia. That's not much over a quarter of the positive results. Is it really sensible to test everyone for dementia, knowing that nearly three quarters of the positive test results would be false positives? Think of the resulting worry and anxiety. But it does all depend on your values. You may think it is worthwhile doing the testing. Or you may not.

The report on prostate cancer screening raises similar issues, but it draws attention to another issue as well. It says that "the risk for overdetection and overtreatment in the screening group is considerable". What does that mean?

Well, an issue with several diseases, including prostate cancer, is that if you go and look for people who have the disease, you will find them. More specifically, you will find people with signs of prostate cancer whom you wouldn't have found if you hadn't done the screening. But isn't that the whole point of screening? Up to a point, yes it is. But among all these cases that you find, there will be people who really do have a cancer, but in whom the cancer is so small and so slow to develop that it will never cause them any real harm. Prostate cancer is very predominantly a disease of older men, and older men may have a cancer that is so slow to develop that they are bound to die of something else before it makes itself apparent. That is, the screening has found disease that is not going to cause a problem. This issue is generally called "overdiagnosis". (There are other aspects of overdiagnosis, but with prostate cancer, this aspect is important.) Overdiagnosis may in turn lead to "overtreatment", that is, treatment of something that isn't really causing a problem and never will. The treatment may be unpleasant, may be painful, and may have adverse effects that are worse than anything that would have occurred if the disease had never been detected by screening.

So that's something else to take on board. Screening will detect some cases of prostate cancer that would not otherwise be detected. Some of these cases really need treating, and will receive treatment that they might not have received if it were not for the screening. In other cases, there will be overtreatment. There is a trade-off, again.

If you're interested in the issues of overdiagnosis and overtreatment, there's an excellent recent book on the subject: "Overdiagnosed: making people sick in the pursuit of health" by Welch, Schwartz and Woloshin. It's a US publication (and not easy to get in the UK, at the time of writing), and discusses the issues in the context of the US health system, but most of the general principles apply in other Western countries as well - and you'll be lucky to find anything that discusses these important issues as clearly.




Great to see interest in this subject in the UK. I'm a big fan of H Gilbert Welch. In the US, it's going to be an uphill battle to convince the public not to get tested for everything that can be tested, even when one is perfectly healthy and has no symptoms. There's a nice excerpt from Welch's book at