Daniel Kahneman is the Eugene Higgins Professor of Psychology Emeritus at Princeton University and Emeritus Professor of Public Affairs at the Woodrow Wilson School of Public and International Affairs. He was awarded the Nobel Prize in Economics in 2002 for his pioneering work with his fellow Israeli-born colleague Amos Tversky on decision-making under uncertainty. Kahneman is also the author of the best-selling “Thinking, Fast and Slow” (2011). Both Kahneman and Tversky advanced the discipline of behavioural psychology immeasurably, but the world has been slow to work out how their insights might be used to improve decision-making, particularly in public policy.
Their joint research examined how we humans make decisions and choices (we are supposed to be rational), how we rate probabilities, and how well we predict outcomes. Drawing on research and extensive sampling by behavioural psychologists and economists, they found that although we quite often make the right decisions, in the sense that they are demonstrably in our interests, it can be for the wrong reasons; indeed, we are all susceptible, in systematic ways, to mistakes arising from the way our brains, or minds, work. Our decision-making is subject to a number of biases, ‘cues’ and preconceptions of which we are mostly unaware. These biases often arise from holding onto our preferences and beliefs regardless of contrary information. Social pressures, individual motivations, emotions, the way we tap our short-term memories and limits on the mind’s ability to process information can all contribute.
The motivation of these psychologists was that if we know why we make errors of judgment, we can try to do something about it. That could have a profound effect on the way we manage our daily lives and, more broadly, on how our public services are delivered. In short, we could improve decision-making and might learn to spot where human judgment goes wrong. And if we could figure that out, we might be able to close the gap between expert judgment and algorithms.
Kahneman and Tversky demonstrated the ways in which human minds err systematically when forced to make judgments about uncertain situations, and we are all, of course, presented daily with uncertain situations.
In such an uncertain world we understandably turn to ‘experts’. But it transpires that they, too, are subject to serious errors of judgment.
Consider the medical profession. Professor Paul J. Hoffman, in research dating back to 1960 (‘The Paramorphic Representation of Clinical Judgment’), looked at the way medical experts, in this case radiologists, diagnosed from X-rays whether patients had stomach cancer. In some walks of life there is insufficient data to build algorithms that might replace the human judge, but medicine is not necessarily one of them. Hoffman wanted to find out how radiologists reached their judgments, so he set out to build a model of what these experts were doing when they formed them. He began by identifying the inputs the experts used. The radiologists said there were seven major signs they looked for to determine whether a stomach ulcer was cancerous: its size, the shape of its borders, the depth of the crater, and so on. A simple algorithm was created that weighted the seven factors equally. The researchers then asked the doctors to judge the probability of cancer on a seven-point scale from ‘definitely malignant’ to ‘definitely benign’. Unbeknownst to the doctors, the 92 X-rays of different ulcers were presented in random order, with each X-ray presented twice.
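An equal-weight model of this kind is almost trivially simple, which is rather the point. The sketch below illustrates the idea; the cue names, the 1-7 rating scale direction and the example ratings are hypothetical assumptions for illustration, not Hoffman's actual variables or data.

```python
# Illustrative sketch of an equal-weight linear model in the spirit of
# Hoffman's 1960 study. The seven cue names and the example ratings are
# invented; the original study used radiologists' actual judgments.

CUES = [
    "ulcer_size", "border_shape", "crater_depth", "crater_width",
    "rim_contour", "location", "surrounding_mucosa",
]

def equal_weight_score(ratings):
    """Average the seven cue ratings (each assumed on a 1-7 scale).

    Every cue gets exactly the same weight, which is the model's
    whole trick: no clinical nuance, just an unweighted average.
    """
    assert set(ratings) == set(CUES), "need a rating for every cue"
    return sum(ratings.values()) / len(CUES)

# Example: one hypothetical ulcer rated on each of the seven cues.
example = {cue: r for cue, r in zip(CUES, [6, 5, 7, 4, 6, 3, 5])}
score = equal_weight_score(example)  # 36 / 7, roughly 5.14
```

The surprise in the study was not that such a crude average works at all, but that it tracked, and then beat, the experts whose stated cues it borrowed.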
The results were, in a certain sense, terrifying.
Although the doctors thought the processes they followed to make their judgments were complex and, of course, informed by experience, this simple model captured them well. Their diagnoses were in fact all over the shop. When presented with duplicates of the same ulcer, every doctor contradicted himself, rendering more than one diagnosis; the doctors apparently could not even agree with themselves. A similar experiment asking clinical psychologists and psychiatrists to predict whether it was safe to release a patient from a psychiatric hospital found that those with the least training, who had only just graduated, were just as accurate as fully trained, experienced practitioners.
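The duplicate-presentation trick gives a simple measure of self-consistency: show each case twice and count how often a judge gives the same answer both times. The sketch below shows one way to compute that; the ratings are invented for illustration and do not come from the study.

```python
# Hypothetical sketch of the duplicate-presentation consistency check.
# Each case is rated twice (on a 1-7 scale) and we measure how often a
# single judge agrees with himself. All data here is invented.

def self_agreement(first_pass, second_pass):
    """Fraction of cases rated identically on both presentations."""
    assert len(first_pass) == len(second_pass)
    matches = sum(a == b for a, b in zip(first_pass, second_pass))
    return matches / len(first_pass)

# One doctor's ratings for five duplicated cases (invented numbers):
round_one = [2, 6, 4, 7, 3]
round_two = [2, 5, 4, 7, 6]
rate = self_agreement(round_one, round_two)  # 3 of 5 match: 0.6
```

A perfectly consistent judge scores 1.0; the finding above was that no doctor came close.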
The lesson drawn from the X-ray test was that a simple algorithm had outperformed not merely the group of doctors but even the best individual doctor. You could beat the doctor by replacing him with an equation created by people who knew nothing about medicine and had simply asked doctors a few questions. (Remember, this was 1960!)
There is now a good deal of research telling us how often we make misjudgments even when given good information, on the effectiveness of algorithms (man versus man-made model), and on the growing actual and potential impact of Artificial Intelligence (which is rapidly rising up the political agenda). Yet we seem to have been remarkably slow to put this knowledge to good use, particularly in the field of education and learning. Hopefully, this will change soon.
It is pretty clear that psychological issues are relevant to policy formulation and implementation, and to the design of ‘choice architecture’. You cannot assume that all individuals, acting for themselves or as economic agents, are completely rational. Most of the time, as Kahneman points out, we can trust intuition, and indeed we do. He draws the distinction between fast thinking and slow thinking; our lives mostly run on fast thinking, which normally serves us very well. But there are situations where people would do better by slowing down, and where they need more than a little help. And expert judgment can be fatally wrong. Don’t just think of medicine here: think of the financial crash of 2007/8 and other sectors. One might also look at a few flawed experiments in education policy, since education ministers are as subject to biases (and to cherry-picking evidence) as the next person.
Kahneman says: “We haven’t yet found the right model to look at decision-making under fear, how people react when the world feels dangerous and uncertain.” So the work is ongoing, but there is ample scope for making better use of man-made models and exploiting Artificial Intelligence within a secure regulatory framework.
See also The Undoing Project: A Friendship That Changed Our Minds by Michael Lewis (Allen Lane, 2017), which describes the context of behavioural psychology research and the relationship between Kahneman and Tversky.