Does street lighting really reduce fatal road crashes by 2/3?

As of the 23rd May 2022 this website is archived and will receive no further updates.

understandinguncertainty.org was produced by the Winton programme for the public understanding of risk based in the Statistical Laboratory in the University of Cambridge. The aim was to help improve the way that uncertainty and risk are discussed in society, and show how probability and statistics can be both useful and entertaining.

Many of the animations were produced using Flash and will no longer work.

Cochrane Reviews are usually taken as the gold standard in putting the evidence together to check whether a treatment works. But a new Cochrane Review that examines how much the ‘treatment’ of putting in street lights prevents injuries and saves lives seems to suffer from some major flaws which could mean the claimed benefits from street lighting are greatly exaggerated.

The press release claimed that the Review showed that putting in street lights "reduced total crashes by between 32% and 55% and fatal injury crashes by 77%". This was picked up by the Daily Telegraph, which reported that "Turning off street lights could put motorists at risk", while the Daily Mail headlined with "Switching off street lamps 'could triple road deaths'".

The actual review is much more cautious in its conclusions, but perhaps not cautious enough. It points out that the quality of the studies was poor and most were carried out decades ago when road conditions were different, with one even dating from 1948, 61 years ago. The evidence for the reduction in fatalities came from 3 studies which looked at roads on which street lights had been installed, and compared the change in the number of fatal crashes during daylight with the change in fatal crashes during darkness: this comparison should mean that any trend over time is taken care of. Before the street lights were installed there had been 32 fatal crashes during daylight and 64 during darkness; after street lights there were 36 during daylight. If the street lights had had no effect we would have expected around 64 x 36/32 = 72 fatal crashes in darkness after the street lights were installed, whereas in fact there were only 25: 35% of what would have been expected had street lights been useless. So we might estimate a 65% reduction (the authors estimate a 67% reduction using a slightly different method – where the press release got their 77% reduction is a mystery).
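The arithmetic above can be checked with a short sketch. The counts are those quoted from the pooled studies; "daylight" crashes serve as the control that absorbs any general trend over time:

```python
# Before-and-after fatal crash counts from the 3 pooled studies
# (daylight acts as the control series).
day_before, dark_before = 32, 64
day_after, dark_after = 36, 25

# If lighting had no effect, darkness crashes should follow the same
# trend as daylight crashes: 64 * 36/32 = 72 expected.
expected_dark_after = dark_before * day_after / day_before

ratio = dark_after / expected_dark_after  # observed as a fraction of expected
reduction = 1 - ratio                     # estimated effect of lighting

print(f"expected crashes in darkness: {expected_dark_after:.0f}")  # 72
print(f"observed as % of expected: {ratio:.0%}")                   # 35%
print(f"estimated reduction: {reduction:.0%}")                     # 65%
```

The authors' 67% figure comes from a slightly different pooling method, but the order of magnitude is the same.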

This sounds a substantial reduction, but can we believe it? The authors acknowledge the poor quality of the studies, but appear to ignore two vital problems that make this estimate unreliable. The first is ‘publication bias’: if the street lights had had little or no effect, would the authors have gone to the effort of writing up their work and submitting it for publication? We can suspect that only strongly positive studies would have been published, especially decades ago.

The second point is more subtle and involves the concept known as ‘regression-to-the-mean’. Why were these particular roads chosen to have street lights installed? It is extremely likely that these stretches of roads had recently seen high rates of accidents. Accidents follow an erratic random pattern, with runs of highs and lows, and if roads are chosen because of a ‘high-blip’, it is likely that such a run of bad luck will come to an end and the accident rates will return to a more average figure. In other words we can expect the rates to improve by chance alone, whether or not street lights were installed. That’s why, if we really want to check if some new intervention works, we have to decide at random who gets it and who doesn’t.
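A toy simulation (purely illustrative, not taken from the review) shows regression-to-the-mean in action: give many road sites exactly the same underlying accident rate, pick out the ones that happened to look worst, and watch them "improve" with no intervention at all.

```python
import math
import random

random.seed(0)

def poisson(lam):
    """Draw from a Poisson distribution (Knuth's algorithm, stdlib only)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Hypothetical: 1000 road sites, ALL with the same true accident rate.
true_rate = 5
before = [poisson(true_rate) for _ in range(1000)]
after = [poisson(true_rate) for _ in range(1000)]

# Select sites for 'treatment' because they looked bad in the before period.
worst = [i for i, b in enumerate(before) if b >= 9]

mean_before = sum(before[i] for i in worst) / len(worst)
mean_after = sum(after[i] for i in worst) / len(worst)

# The selected sites improve substantially even though nothing was done.
print(f"before: {mean_before:.1f}, after: {mean_after:.1f}")
```

The "treated" sites average well above the true rate beforehand (that is why they were chosen) and fall back towards it afterwards, which is exactly the pattern that an uncontrolled before-and-after study would misread as a benefit.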

This means that the quoted benefits are likely to be considerable over-estimates, and to its great credit this was pointed out by the Daily Mail. It is a shame that this Review does not seem to have met the usual Cochrane standards, although it was not helped by an over-enthusiastic press release.


Comments

Young drivers and riders, particularly males, are at higher risk for crash involvement. Teenage drivers run the greatest risk of any age group, particularly within the first year after receiving a full license. Men, especially young men, are more likely than women to be in a road crash.

Age groups of older drivers exclude the drivers of that age group who were killed driving when they were younger.

Are there any statistics on older people who have just learnt to drive? For example, are drivers of any age "at higher risk for crash involvement" within the first year of getting a full licence? Is experience a contributory factor, I wonder? And could it be said that, generally speaking, there are more younger drivers who have just received a full driving licence than older ones, so that age group would be over-represented in first-year crash statistics?

I have read countless reports of lighting doing this or that, or not doing this or that. All these reports are fundamentally flawed by referring to the generic concept "lighting". The reality is that there can be a world of difference between one lighting scheme and the next: how is the lighting evaluated by the researcher? Does it comply with the CORRECT standard? Is it a good scheme? Secondly, the "regression to the mean" argument really does not stack up. Yes, there can be ups and downs in accident stats; however, intelligent site assessment (often undertaken because there have been higher than average accidents) may reveal that the particular site IS at above-normal danger of accidents. To then NOT light it for fear that "regression to the mean" might make that lighting irrelevant would, in my opinion, be grossly negligent. Furthermore, to install street lighting where it was not wanted or needed just to prove the point would also be criminally insane from a light pollution perspective.

If the purpose is to have a scientifically conducted trial to validate the safety aspect of street lighting how else can that be done except by choosing sites at random? In drug trials half the trial population is given the active ingredient and half a placebo. The two populations are chosen at random so it is unknown which do or do not have the illness. That means some with the disease which the active ingredient under test might cure do not get it and so continue to suffer or die. Is this criminally insane? If all other influences and biases are not excluded how can we be sure of the effectiveness of a drug or safety measure? Without randomising, a "trial" could show rubber ducks placed along a road decreased accidents. So let's line all roads with rubber ducks.

Reducing fatal road crashes by 2/3 is a little too optimistic in my opinion, even if we are talking about street lighting. There are many other factors that account for street crashes. If I were to come up with a strategy for reducing crashes I would probably start with some well-positioned custom-made signs, good lighting, and better police control on the roads.

Police controlling busy traffic are the best way to reduce fatal crashes. The only downside is that when you get caught doing the no-no's, e.g. exceeding the speed limit or talking on the phone without hands-free, they might indirectly be asking you for 'something else' to avoid giving you tickets.

The main factor in car accidents is excessive speed. If they really want to reduce car crashes they should find a way to reduce the number of drivers who are irresponsible and drive faster than the limit. Some more radars and policemen out on the streets looking for such drivers would really make the numbers drop.

When I lived in Canada, it was said that the RCMP ALWAYS put excessive speed as a factor in an accident......


Street lighting could possibly prevent road accidents from happening, but not exactly 2/3 of them. I mean, there are far more factors in a road mishap than poor street lighting. There's drunk driving, loose brakes and the like. We don't have to read between the lines of academic dissertations for proof that we just have to be responsible on the road to avoid accidents, right?