In an earlier post, and one of a series by me and subsequently Ken as well, I suggested that an important part of any professional education should be a kind of counter-narrative in which those who learn a profession are also made familiar with that profession’s cognitive biases, with a view to lessening them in practice.
Nice to see this kind of thing is beginning to be taken seriously in management books. Actually it might have been taken seriously before now; I wouldn't know, because I don't read management books. But I occasionally browse them, and it hasn't seemed too prominent in my browsing. In any event, Daniel Kahneman et al. have a long article in the HBR on the behavioural economics of business decision making.
And a useful check-list for when an organisation is making big decisions. Viz:
Preliminary Questions: Ask yourself
1. Check for Self-interested Biases
Is there any reason to suspect the team making the recommendation of errors motivated by self-interest?
Review the proposal with extra care, especially for overoptimism.
2. Check for the Affect Heuristic
Has the team fallen in love with its proposal?
Rigorously apply all the quality controls on the checklist.
3. Check for Groupthink
Were there dissenting opinions within the team?
Were they explored adequately?
Solicit dissenting views, discreetly if necessary.
Challenge Questions: Ask the recommenders
4. Check for Saliency Bias
Could the diagnosis be overly influenced by an analogy to a memorable success?
Ask for more analogies, and rigorously analyze their similarity to the current situation.
5. Check for Confirmation Bias
Are credible alternatives included along with the recommendation?
Request additional options.
6. Check for Availability Bias
If you had to make this decision again in a year’s time, what information would you want, and can you get more of it now?
Use checklists of the data needed for each kind of decision.
7. Check for Anchoring Bias
Do you know where the numbers came from? Can there be
…unsubstantiated numbers?
…extrapolation from history?
…a motivation to use a certain anchor?
Reanchor with figures generated by other models or benchmarks, and request new analysis.
8. Check for Halo Effect
Is the team assuming that a person, organization, or approach that is successful in one area will be just as successful in another?
Eliminate false inferences, and ask the team to seek additional comparable examples.
9. Check for Sunk-Cost Fallacy, Endowment Effect
Are the recommenders overly attached to a history of past decisions?
Consider the issue as if you were a new CEO.
Evaluation Questions: Ask about the proposal
10. Check for Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect
Is the base case overly optimistic?
Have the team build a case taking an outside view; use war games.
11. Check for Disaster Neglect
Is the worst case bad enough?
Have the team conduct a premortem: Imagine that the worst has happened, and develop a story about the causes.
12. Check for Loss Aversion
Is the recommending team overly cautious?
Realign incentives to share responsibility for the risk or to remove risk.
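Purely as an aside for the programmers among us: a checklist like this is mechanical enough to be run as a routine rather than from memory. Here is a minimal sketch in Python of how one might hold it as data and collect the remedial actions for whatever biases a reviewer flags. The structure and names are mine, not Kahneman et al.'s, and only a few of the twelve checks are transcribed; it is illustrative only.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Check:
    stage: str     # "Ask yourself", "Ask the recommenders", or "Ask about the proposal"
    bias: str      # the bias the question screens for
    question: str  # the question to put
    action: str    # what to do if the answer raises a flag


# A few of the twelve checks, transcribed from the checklist above;
# the remainder would follow the same pattern.
CHECKLIST = [
    Check("Ask yourself", "Self-interested biases",
          "Is there any reason to suspect errors motivated by self-interest?",
          "Review the proposal with extra care, especially for overoptimism."),
    Check("Ask yourself", "Groupthink",
          "Were there dissenting opinions within the team?",
          "Solicit dissenting views, discreetly if necessary."),
    Check("Ask the recommenders", "Confirmation bias",
          "Are credible alternatives included along with the recommendation?",
          "Request additional options."),
    Check("Ask about the proposal", "Disaster neglect",
          "Is the worst case bad enough?",
          "Have the team conduct a premortem."),
]


def review(flagged_biases):
    """Return the checks whose bias a reviewer flagged, i.e. the
    remedial actions the decision-makers still owe the proposal."""
    return [c for c in CHECKLIST if c.bias in flagged_biases]


if __name__ == "__main__":
    # Suppose a reviewer flagged groupthink and disaster neglect.
    for check in review({"Groupthink", "Disaster neglect"}):
        print(f"[{check.stage}] {check.bias}: {check.action}")
```

Nothing deep there; the point is just that the checklist lends itself to being made a standing procedure rather than an act of recall.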
That’s pretty good. It reminds me of the way I was taught Ethics via Kallman & Grillo.
In my experience, in both Government and private sectors, so much of this rings true – particularly over-optimism and the attachment to past decisions.
This sort of stuff was never raised when I did my MBA in the 1990s, and in all my time since, raising it has been resented by more senior staff.
Government, however prone to fallacious decision-making, is forced to accept that a multiplicity of views prevails in the community, and that these therefore need to be at least acknowledged in decisions.
Business leaders are far more prone to tunnel vision predicated on commercial biases (understandably enough) and tend to be openly hostile to any attempt to add some intellectual rigour to their analyses.
I see this a lot in my office, where dissenting opinions are treated as a threat to management rather than explored.
My work encouraged us to read about half a dozen articles by Kahneman and similar… but I don’t think many people take it very seriously :(
This article (pdf) is about organisational and political pathologies which conspire to produce lousy outcomes.
Bent Flyvbjerg, 2009, “Survival of the unfittest: why the worst infrastructure gets built—and what we can do about it”, Oxford Review of Economic Policy, Volume 25, Number 3, pp. 344–367.
This passage reminds me of the confusion between the public interest and the ‘can do’ official, and of regulation impact statements.
Note to self: This article is also relevant to this issue “How Emotional Intelligence Can Improve Decision-Making”
Here’s another good bit of the countercultural repertoire – promoted by Kahneman
An interesting article on some of this stuff here via HBR.
Reading Francis Bacon, it becomes clear that a preeminent part of his own strategy for getting human reason onto a new, more productive footing was throwing off the many ‘idols of the mind’, which he classified into four kinds. From Wikipedia:
Idols of the Tribe (Idola tribus): This is humans’ tendency to perceive more order and regularity in systems than truly exists, and is due to people following their preconceived ideas about things.
Idols of the Cave (Idola specus): This stems from individuals’ personal weaknesses in reasoning, owing to their particular personalities, likes and dislikes.
Idols of the Marketplace (Idola fori): This is due to confusion in the use of language and taking some words in science to have a different meaning than their common usage.
Idols of the Theatre (Idola theatri): This is the following of academic dogma and not asking questions about the world.
As Bacon put it, “the doctrine of Idols is to the interpretation of nature what the doctrine of the refutation of sophisms is to common logic.”
Thaler on this stuff.