The psychology of risk
Psychology of Risk and the “public perception of risk” research
“A statement such as 'the annual risk from living near a nuclear power plant is equivalent to the risk of riding an extra 3 miles in an automobile' fails to consider how these two technologies differ on the many qualities that people believe to be important. As a result, such statements are likely to produce anger rather than enlightenment and they are not likely to be convincing in the face of criticism” (Slovic, p.271)
People's perceptions of risk are not easily represented quantitatively through a probability of coming to harm, or even through a more sophisticated calculation balancing that probability against the expected benefits. Factors such as the nature of the possible death, for example, play a crucial role when people assess whether to take a particular risk. To take a personal example, because of my fear of heights I would rather face the risk of dying in a car crash than a numerically equivalent risk of dying in a plane crash – such reactions are perfectly rational, and depend on the personal likes and dislikes of different people. In fact, not even the nature of the harm has to be equivalent: again speaking personally, I would rather face the very real (but not that high) risk of dying from working in an asbestos-lined office from the 60s than live in a house infested with big scary spiders, which I know perfectly well is actually quite safe (if we discount my increased risk of spider-induced heart attacks!).
The psychometric paradigm
Over the last 40 or so years, psychologists have studied people's reactions to different kinds of risks under different circumstances, using methods such as questionnaires to produce quantitatively scaled results. This line of research on the perception of risk is often called the “psychometric paradigm” and is associated with the work of psychologists of risk such as Starr, Tversky, Slovic, Kahneman and Fischhoff. One of the stock research questions that comes up several times in the research of Paul Slovic (ref) is “how safe is safe enough?”, which encapsulates the direction and impetus behind much of this kind of research: what levels of risk, and under what circumstances, do people find acceptable? When and under what circumstances do people think the benefits of, say, a new technology outweigh its known and potential risks?
Applications for this research are found in “risk management” and other areas where an understanding of how people behave when faced with risk scenarios is vital. Insights in this area are useful, for example, to insurance companies, who need to understand which risks people feel they need insurance against, and to policy makers, who want to know how to react to risks – which issues surrounding risks the public find most urgent, and which ones the experts find most urgent. The research even offers valuable insights into how to communicate risks.
Research within this tradition has revealed several interesting facets of people's perceptions of and attitudes to risk:
Voluntary versus involuntary risks
One of the earlier and groundbreaking (though perhaps not too surprising) results, still quoted liberally today, comes from Starr's 1969 paper. In his study, Starr found that people are more willing to accept risks that they take on voluntarily, such as skiing or motorcycling, than numerically equivalent risks over which they have little or no choice, such as living near a nuclear power station or having fluoride added to the water supply (this is not to say that these examples are numerically equivalent, or even numerically assessable at all – they are simply standard examples of voluntary and involuntary risks).
The “white male effect”
This effect is so pronounced that, according to Slovic (p.xxxv), it is found in “almost every study of risk perception”, yet it is still unclear how it comes about. Generally, males perceive risks to be smaller than females do. White males perceive risks to be smaller than black males do, while among females the differences between groups are not very great. These effects are not a matter of biological differences; it is important to note that these studies were carried out mostly in the United States, where these groups often have different socio-economic and cultural backgrounds, and the results might well differ in other countries. Still, why and how exactly these backgrounds influence risk perception in this way is not well understood.
Dread risks
Some ways of coming to harm hold a particular dread for us, and people would rather avoid these than others. My fear of heights makes me much more wary of dying in a plane accident than in a car accident.
Which risks are particularly dreaded changes from person to person, though many dread risks are shared more generally. These can be, again, things like flying, as this is a fairly widespread phobia. They can also be culturally influenced – nuclear power is often shown in psychometric research to be a dread risk (ref to Slovic).
The optimistic bias
People have a tendency to underestimate the risks they face themselves. Even when we know the risks faced by the population in general, we tend not to apply the same risk to ourselves. There may be two reasons for this. One is that we have a certain measure of control over our fate in most risk situations (see also above on the higher acceptability of risks we have control over). When faced with the statistics for car accidents, for example, we tend to assume our own risk is lower, because we are the ones driving the car and therefore have control over it – this goes together with the curious fact that most people believe they are better drivers than average.
But the optimistic bias extends even to situations where we cannot control our risks, or can do so only in very limited ways. (example) It has therefore also been interpreted as a kind of denial mechanism in people who are uncomfortable contemplating their own mortality, as almost all of us are (see Joffe 1999 ch. ?? for a discussion of the optimistic bias).
The availability heuristic
Events that are easier to imagine, or that are more immediately available to us, are judged more likely to occur than events that we cannot imagine easily or that we rarely hear about. So, for example, we may estimate the probability of a heart attack by recalling one happening to an acquaintance, or the failure of a business by recalling reports of similar businesses (both examples are taken from Tversky & Kahneman 1974, p.1127).
Risk compensation behaviour
When we are so used to certain risks that they are part of our daily lives, we tend to see any reduction of that risk as a bonus, and therefore allow ourselves to indulge in behaviour that balances out the risk again. This happens, for example, when I tell myself "I've eaten nothing but healthy food all week, so now to reward myself I'll indulge in lots of cake", or "I didn't smoke at all yesterday, which means I can have a ciggy now".
Risk compensation behaviour is also evident on a larger societal scale, where it impacts policy decisions. To take an example frequently cited by Slovic, when dams are built to protect areas at risk of flooding, people tend to build in those areas more than they did previously, because they know the area is now safer. However, while the risk of flooding for the whole area has been reduced (a flood can still happen, but it is less likely), if one does happen, far more people will be affected.
Therefore, over the whole area, the risks (i.e. the probability of a flood times the financial cost of a flood) are fairly similar whether a protective dam is built or not. Although this equation of the two risks may not make much sense for the individual living in the affected area, it does affect planning at government level and in the insurance industry.
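The trade-off behind this argument can be made concrete with a small back-of-the-envelope calculation. The figures below are purely hypothetical, chosen only to illustrate how a lower flood probability and a higher exposed value can cancel out in the expected-loss arithmetic; they are not taken from Slovic's work:

```python
# Hypothetical figures, purely illustrative -- not data from any study.

def expected_annual_loss(p_flood, exposed_value):
    """Expected loss: probability of a flood times the cost if it occurs."""
    return p_flood * exposed_value

# Before the dam: floods are fairly likely, but little is built in the area.
before = expected_annual_loss(p_flood=0.05, exposed_value=10_000_000)

# After the dam: floods are five times rarer, but development has put
# five times as much value in harm's way.
after = expected_annual_loss(p_flood=0.01, exposed_value=50_000_000)

# Both work out to the same expected annual loss, even though any single
# flood after the dam is built would be far more destructive.
print(before, after)
```

With these numbers the two expected losses coincide exactly, but the point survives with any figures where the growth in exposed value roughly offsets the drop in probability.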