mtvessel ([personal profile] mtvessel) wrote 2015-05-31 05:43 pm

Rules of Thumb

Aug 2014
Risk Savvy: How to Make Good Decisions - Gerd Gigerenzer - Penguin, 2014 (Kindle edition)
* * * *
I bought this book because Gigerenzer is one of the few people to criticise Daniel Kahneman's overrated theory of fast and slow thinking. He particularly objects to the assumption that fast, instinctive estimates (Kahneman's system 1) are inherently more error-prone than considered thought (system 2). Not so, says Gigerenzer - the two systems are not separate and we can and do use both together. He points out that quite simple heuristics - rules of thumb - of the sort used in system 1 thinking can often be more effective in finding workable solutions than considered thought; as he puts it, "it is often better to be roughly right than precisely wrong". He illustrates the effectiveness of heuristics in a wide range of fields from banking to healthcare to aircraft safety, and while he clearly has a few bees in his bonnet, I can't help thinking that he is right.


One of the reasons I sympathise with Gigerenzer rather than Kahneman is that his solution to the problem of cognitive biases leading us to make bad decisions is more optimistic. There is a whiff of fatalism in Kahneman's model - our systems of thinking, he implies, are hard-wired into us and there is little that we can do about their inadequacies. This has led to the development of Nudge Theory, in which a softly paternalistic organisation (presumably staffed by people who are particularly good at system 2 thinking) works with our in-built cognitive biases to guide us into compliance with perceived social norms.

But there is a fundamental problem with system 2 thinking as a problem-solving approach, one that Kahneman does not address. It can only give the right answer if all the relevant factors are known and can be calculated, or at least reasonably estimated. And for many problems in real life - far more than government, media and institutional communications would have us believe - this is not possible. Gigerenzer explains this by (re-)defining a risk as a factor whose probability can be calculated and an uncertainty as one that cannot. The absence of convincing models for decision-making in many areas of life - finance, romance, natural disasters and health, for example - implies that they must involve uncertainties - "unknown unknowns" - that make risk estimation impossible.

So what can be done about problems with a large number of uncertainties? Gigerenzer proposes a programme of citizen education in psychology and statistics that alerts people to our biases, particularly in the area of risk assessment. Risk-savvy citizens can then reliably use gut feel and rules of thumb to make sensible decisions for themselves.

This education needs to extend to the communications strategies of journalists and experts who make statements of probability to the public. For example, in 1995 the UK Committee on the Safety of Medicines issued a letter to doctors warning that third-generation contraceptive pills increased the risk of thrombosis by 100% compared to second-generation ones. Oral contraception usage subsequently decreased by some 13% among teenagers between 1995 and 1996, resulting in an estimated 26,000 extra conceptions, 13,000 extra abortions and a bill to the NHS of approximately £67 million.

So were people being stupid? Not on the face of it - a doubling of risk sounds scary. But the absolute incidence of thrombosis amongst women using second-generation pills was 1 in 7000, and with the third-generation pill it increased to 2 in 7000. If the figures had been presented in this way, much of the fear factor - and the resulting misery of unplanned pregnancies and abortions - could have been avoided. Gigerenzer's rule is simple - always express risk changes in absolute and not relative terms.
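To see how different the two framings feel, here is a minimal sketch in Python (my own illustration, not from the book) that computes both the relative and the absolute change from the figures above.

    # Contrast relative and absolute framings of the same change in risk,
    # using the 1995 pill-scare figures quoted above.
    baseline_risk = 1 / 7000   # thrombosis incidence on second-generation pills
    new_risk = 2 / 7000        # incidence on third-generation pills

    relative_increase = (new_risk - baseline_risk) / baseline_risk   # 1.0, i.e. "100% higher"
    absolute_increase = new_risk - baseline_risk                     # one extra case per 7000 women

    print(f"Relative increase: {relative_increase:.0%}")
    print(f"Absolute increase: about 1 extra case per {round(1 / absolute_increase)} women")

The same data, framed the second way, sounds far less alarming - which is precisely Gigerenzer's point.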

He also points out the inherent ambiguity in the statement "there is a 30% chance of rain in Oxfordshire tomorrow". Does it mean that it will rain for 30% of tomorrow, or that it will rain in 30% of Oxfordshire, or that it will rain on 30% of the days for which the announcement is made, or even that 30% of meteorologists believe that it will rain tomorrow? You may think that the answer is obvious, but studies show that different people will make different assumptions about the reference class if the expert making the prediction does not specify it (interestingly, there appears to be some cultural specificity about the reference class that people choose). Again, a simple rule - always express a probability as a frequency of a reference class rather than as the likelihood of a single event - avoids potentially catastrophic misunderstandings of statistics.
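As a toy illustration of the rule (mine, not Gigerenzer's), the fix is simply to name the reference class and state the probability as a frequency within it:

    def as_frequency(probability: float, reference_class: str, n: int = 100) -> str:
        """Rephrase a single-event probability as a frequency in an explicit reference class."""
        count = round(probability * n)
        return f"It rains in {count} out of {n} {reference_class}."

    # The same "30% chance of rain" under three reference classes a listener might assume:
    for ref in ("days like tomorrow", "locations in Oxfordshire", "hours of tomorrow"):
        print(as_frequency(0.30, ref))

Printed side by side, the three statements make the ambiguity obvious; the forecaster's job is to pick one and say so.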

Gigerenzer does not deny the existence of cognitive biases and implicitly accepts that education is not a cure-all. He points out that the 9/11 terrorists killed 1500 more people than reported by causing a preference for driving rather than flying among US citizens for a year or so after the attack, resulting in an increase in fatal road accidents. The unreasonable emotional prioritisation given to "dread risks" of this sort - low-probability events like 9/11 that kill many people at one point in time - is a classic Kahneman cognitive bias and there is no evidence that you can train yourself out of it. However, it can be countered with a conflicting strong emotion, for example by pointing out the increased risk to your children from the additional miles of driving.
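The arithmetic behind an estimate of that kind is simple; the sketch below uses illustrative numbers of my own choosing (roughly the right order of magnitude, but not Gigerenzer's figures) just to show the shape of the calculation.

    # Dread-risk sketch: extra road deaths ~= extra miles driven * fatality rate per mile.
    # Both numbers below are illustrative assumptions, not figures from the book.
    road_deaths_per_mile = 1.5e-8    # assumed ~1.5 deaths per 100 million vehicle-miles
    extra_miles_driven = 1.0e11      # assumed miles shifted from air to road over the year

    print(f"Expected additional road deaths: {extra_miles_driven * road_deaths_per_mile:.0f}")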

His support for gut feeling rather than considered logic for decision-making seems harder to justify, but he makes a good case. The miraculous survival of US Airways Flight 1549, which ditched safely in the Hudson following a bird strike that took out both engines, was of course nothing of the sort. The aviation industry is famous for its checklists for every conceivable situation, but in this case there was no time to follow them. Instead the pilot, Captain Chesley B. "Sully" Sullenberger, used a simple heuristic to decide what to do. He fixed his gaze on the LaGuardia airport control tower and saw that it was rising in his windshield rather than falling, which told him that the plane would come down short of the airport - and so he ditched in the Hudson instead.
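The gaze heuristic is simple enough to write down as a rule. Here is a minimal sketch of the decision logic as I understand it (the angles are invented for illustration):

    def will_fall_short(elevation_angles: list[float]) -> bool:
        """Gaze heuristic: if a fixed target keeps rising in the windshield
        (its elevation angle increases over time), the glide will end short of it."""
        return all(later > earlier
                   for earlier, later in zip(elevation_angles, elevation_angles[1:]))

    # Successive sightings of the control tower from the cockpit (degrees above the nose):
    if will_fall_short([2.0, 2.6, 3.3, 4.1]):
        print("Cannot reach the airport - choose another landing site.")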

Such heuristics can be used either consciously or subconsciously - if the latter, they manifest as a gut feeling. This, says Gigerenzer, is why Kahneman is wrong to say that system 2-style conscious thinking is inherently superior to system 1-style instinctive thinking. They can use the same rules of thumb and so are equivalent.

It is also a mistake to say that a system that makes errors is inferior to one that does not. Kahneman and his followers draw an analogy between visual and cognitive illusions. Just as we can't help switching between a duck and a rabbit in the famous picture, they say, so we cannot help making mistakes in thinking; education is therefore useless and the only solution is paternalistic nudge strategies. But Gigerenzer points out that the purpose of our brains is not to perceive without error, which is impossible, but to make educated guesses about what's out there based on incomplete information. So long as we learn from them (and they don't kill us), errors are good. In this view, cognitive biases like anchoring and "what you see is all there is" are symptoms of poor education and a (conscious) refusal to learn from our mistakes rather than inherent human characteristics.

But is there any evidence that learning and training actually work to modify instinctive snap judgements? I don't think that Gigerenzer is sufficiently rigorous in addressing this fundamental question. The US Airways example is strongly suggestive and it is hard to explain the feats of tennis players and cricketers in any other way, but there are extraordinarily few studies that have looked at training as a more general form of cognitive bias mitigation, and those that exist have shown only a moderate effect. However, I doubt that all the possible training techniques have been investigated. Why Gigerenzer, Kahneman and other cognitive psychologists are not making a major effort to do a systematic study I don't know. Presumably they can’t get the funding.

This is a shame, because the discovery of a technique for training our subconscious minds to mitigate cognitive biases would have huge consequences, both for societal governance and for our understanding of ourselves as human beings. I have argued before that free will derives from the interplay between our conscious and subconscious cognition - that our conscious mind can "train" our subconscious, just as our subconscious influences our conscious behaviour through our unthinking decisions and the thoughts that pop into our heads - and this view would be strongly supported if such a technique could be found. If it can't and our subconscious cannot be trained, then our sense of free will must surely be an illusion and our self-awareness merely a froth on a deep ocean of unconscious processing, biases and all, over which we can exert no conscious influence. Which is nice if you don't want to take moral responsibility for your actions, but not so good for civilised society.

Still, I respect scientific evidence, however uncomfortable it may be. And Gigerenzer has certainly not proved his case, any more than Kahneman has proved his. So I shall be watching the tests of their opposing models with great interest. But I really hope that Gigerenzer is right.