As one of the best illustrations of the way our minds deal with uncertainty, consider the following video. Please watch and listen for at least 30 seconds so you can experience the three sequences of spoken words.
Pretty much all humans who watch the video will first hear “ba ba” in the first sequence, then “da da” in the second sequence, and “va va” in the third. Yet if you play the video with your eyes closed, everyone hears only “ba ba” in all three sequences. Hearing “da da” or “va va” is an illusion caused by the fact that the video shows the lips moving as if “da da” or “va va” is being said, while the actual sound is “ba ba” throughout.
It is a wonderful example of how the human subconscious recognises uncertainty and resolves it without the conscious mind being involved at all: in the second sequence the auditory part of the brain deduces that what is said is “ba ba”, whilst the visual part of the brain deduces that what is said starts with a “d”. These conflicting pieces of information are then combined such that the visual information dominates, and the conscious mind is told the sound is “da da”. The conscious brain is not even alerted to the uncertainty, as the sound “da da” is relayed in real time with no hesitation.
What makes the example especially interesting is that conscious awareness of what is going on does not change what the brain tells us about the sound: you can watch the video thousands of times and try to train yourself to hear “ba ba” all the time, but even scientists who have studied the illusion for decades still hear “da da” when seeing the mouth move as if a “d” is uttered. The subconscious resolves the uncertainty in the same way regardless of how the conscious mind tries to direct it.
This “McGurk effect” is also a case where more information actually leads to the wrong perception. Normally, visual information adds to auditory information to improve the processing of spoken language, but on this occasion the extra information creates a conflict between sources, at which point the “correct” information gets disregarded.
It turns out that our subconscious does something similar with everything we see and hear (or sense in any other way): uncertainty about what is sensed is resolved many thousands of times every second to produce a sensation of certainty about what is going on. I am hence at this very moment “seeing” a room with chairs, a bench, a clock, a laptop, a TV, etc. There is no hint of uncertainty at all, such as whether I am seeing a pen or a chopstick, a chair leg or a lamppost, the side of a window pane or the side of a piece of paper stuck on the glass: my subconscious simply paints a picture of what I am seeing, with no role for uncertainty whatsoever.
The pretense of certainty in what is seen and heard thus occurs entirely automatically and holds for every normal human: we have a brain equipped to deduce certainty, not to live with radical uncertainty about everything we see and hear. What is amazing is that this ‘certain’ view emerges even though in actuality my eyes scan but a tiny fraction of the true visual field: nearly everything I think I see is made up by extrapolation, involving lots of little uncertainties. The brain thus deduces that something is “a table” from a few glimpses, combined with the expectation that there is a table, culminating in a picture in our minds full of details that are not really seen at all (like the contours of the whole thing).
There seems an obvious reason for this penchant of our subconscious to present our conscious mind with the pretense of certainty: it allows for quick decision making without distractions. I don’t need to spend energy on the tiny probability that the house cat is actually a devouring tiger and can concentrate on my typing, because my subconscious mind rejects out of hand that the house cat is a devouring tiger. My conscious mind can tell me there might be an escaped tiger from the zoo nearby (which then creates a hint of anxiety), but in the normal course of events such thoughts never enter the mind, for the excellent reason that they waste time and distract from what we are doing.
Our self-pretense of certainty is thus evolutionarily efficient, a ‘hack’ you might say: by simply not alerting our consciousness to the thousand and one uncertainties, our conscious mind is kept in reserve for more rewarding problems to think about. Being aware of uncertainty slows our decision making down immensely and thus needs to be particularly rewarding to even contemplate.
Let me give two more examples of how our mind “fills in the blanks” in a way that is efficient but strictly speaking totally wrong. Read the following sentence as quickly as you can:
“Arocdnicg to rsceearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer are in the rghit pcale.”
Now, with this ‘illusion’ it turns out not everyone reacts the same, unlike the McGurk effect above, to which pretty much 100% of people succumb. Still, the majority, me included, will be able to read the sentence above very quickly and ‘see’ that the sentence is meant to say
“According to research at Cambridge University, it doesn’t matter in what order the letters in a word are, the only important thing is that the first and last letter are in the right place.”
This example illustrates the fact that the majority of people do not truly read every letter of a word but essentially guess at the order of the letters on the basis of what the mind expects to read, helped by some information on the more important parts of the word (like the start and the finish). Our minds simply fill in lots of the blanks, partly on the basis of the overall shape of the word (ie we do not truly look hard at the individual letters either, but make them up from more limited information too), and partly because of what we expect to read in that part of a sentence. So after deducing in the sentence above that we read “Cambridge”, any subsequent word starting with “U” is almost automatically going to be guessed to be “University”.
Once again, filling in the blanks on the basis of deductions so far is efficient, even though the deductions are strictly speaking untrue. Our minds are not truly reading “Cambridge University” when the letters are “Cmabrigde Uinervtisy”. In this case our conscious mind does get alerted that something is not quite right, probably because a bit of higher-order reasoning is needed to unscramble the words quickly. Yet, since there is no obvious alternative way in which the sentence was “supposed” to be read, our conscious mind is not told there is a problem and is kept in a state of blissful certainty: we are not motivated to ‘check’ whether there is another possible sentence hidden in the same letters. All this happens at pretty high speed (I read the sentence above in 3-5 seconds).
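The “first and last letter fixed” jumbling above is easy to reproduce yourself. A minimal sketch (the helper names are my own, purely for illustration):

```python
import random

def scramble_word(word, rng):
    # Keep the first and last letter fixed and shuffle the interior;
    # short or non-alphabetic tokens are left untouched.
    if len(word) <= 3 or not word.isalpha():
        return word
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

def scramble_sentence(sentence, seed=0):
    # A fixed seed makes the scrambling reproducible.
    rng = random.Random(seed)
    return " ".join(scramble_word(w, rng) for w in sentence.split())

print(scramble_sentence("According to research at Cambridge University"))
```

Every scrambled word keeps exactly the same letters, merely reordered in the middle, which is precisely the limited information the reading mind needs to guess the intended word.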
A third example concerns the question of how our minds deal with uncertainty about the future. Consider the following survey question, on which many macro-economic forecasts are based:
“Do you think your business will expand or contract over the next 3 months?”
Note how this question only allows the answer to be a definitive expectation of “expand” or “contract”. To a purist statistician this question is nonsense, because one cannot know what is going to happen in the next 3 months, and one thus “should” have in mind a “probability distribution” over all the events that might cause the business to expand or contract. A statistician would thus prefer to ask business leaders questions like
“What probability do you assign to the event that your business will expand 10% or more in the next three months?”
The problem with such probabilistic questions is that many respondents will not be able to answer. Most people do not think in terms of probabilities of future events, let alone probabilities of broad categories like “10% or more”: it takes enormous training to think that way about something. That’s why the ‘incorrect’ versions of questions about the future dominate surveys.
The question on business expansion thus divides the future neatly into two scenarios, up or down, and simply asks people to say which one they believe holds. This is how people think about the future: in terms of storylines, ie scenarios. Most of us do not really think in terms of uncertainties but rather in terms of competing “scenarios”, where everything within each scenario is taken to be certain. A statistician would call a single scenario a “point estimate”. By allowing multiple scenarios, people can in effect hold a few point estimates at once. To a purist statistician any exact scenario occurs with probability zero and is thus useless, yet in politics and business, having several scenarios to mull over is about as sophisticated as talk about the future gets.
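The contrast between the two kinds of questions can be made concrete. A minimal sketch with made-up numbers (the distribution below is purely illustrative, not real survey data): the statistician holds a probability distribution over growth outcomes, while the survey collapses that whole distribution into a single scenario.

```python
# Hypothetical beliefs about business growth over the next 3 months:
# outcome (fractional growth) -> probability assigned to it.
growth_distribution = {
    -0.10: 0.05,  # shrink 10%
    -0.02: 0.20,  # shrink 2%
     0.00: 0.20,  # flat
     0.03: 0.40,  # grow 3%
     0.12: 0.15,  # grow 12%
}

# The statistician's answers: probabilities over events.
p_expand = sum(p for g, p in growth_distribution.items() if g > 0)
p_grow_10_plus = sum(p for g, p in growth_distribution.items() if g >= 0.10)

# The survey's answer: the whole distribution collapsed to one scenario.
survey_answer = "expand" if p_expand > 0.5 else "contract"

print(f"P(expand)      = {p_expand:.2f}")
print(f"P(grow >= 10%) = {p_grow_10_plus:.2f}")
print(f"Survey answer:   {survey_answer}")
```

The survey answer “expand” throws away everything else the distribution knows, such as the 25% chance of outright contraction, which is exactly the loss of information the purist statistician objects to.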
The general insight is hence that our very thought processes are set up to resolve uncertainty such that the conscious mind deals mainly in total certainties, even when considering the future. That appears to be an evolutionarily optimal strategy: allowing more than a tiny bit of uncertainty about the present, or more than a few scenarios for the future, just distracts us too much and paralyses our ability to take action. In order to be decisive, we are set up to be extremely bad at openly thinking in terms of uncertainty.
This is also how we should think of uncertainty-resolution in social groups: as something that is usually efficient and necessary to be able to decide quickly. No politician can afford to sound uncertain, let alone say something as scientific as “With my policies there is a 5% higher chance of economic growth”. Group leaders must exude certainty lest they be seen as weak and not leaders at all. The inability of individuals and groups to live with much uncertainty is then not a sign of how backwards they are, but a ‘hack’ that allows faster decision making. This is even more true in a crisis: the more that humans sense quick decisions need to be taken, the less they, as individuals or as a group, can allow for the possibility of uncertainty. They need ‘to run’ with whatever presents itself as the certainty of that moment.
A major question for large groups is then how to avoid the trap of having the group as a whole become totally certain about something that just isn’t so. In terms of the McGurk effect, the question is how the group as a whole can have mechanisms to ensure that the “truth” about the sound is recognised by some people, combined with some mechanism to convince the others despite conflicting information. One obvious answer is that you want some people in the group who only listen (and do not use their eyes), for they will then hear something totally different in the McGurk video from what those who both see and listen think they hear. So one answer to the trap of certainty is organised radical diversity in perception.
This and many other group considerations around uncertainty are for a future part though. They involve the issue of how emotions lead to the search for agency stories in situations of uncertainty. They involve the issue of how a leader must respond to the demand to exude certainty and control. They involve mechanisms like independence, public inquiries, and ‘conscience’ that very large groups develop to maintain and reward diversity of perspective. They involve the impossibility for representative leaders of openly maintaining perspective when very strong group emotions get involved. The bigger idea in the background is thus that our societies need, and already have, lots of ‘counter-hacks’ to limit the damage of the many “usually efficient hacks” we as individuals and social groups have.