In an earlier post, and one of a series by me and subsequently Ken as well, I suggested that an important part of any professional education should be a kind of counter-narrative in which those who learn a profession are also made familiar with that profession’s cognitive biases, with a view to lessening them in practice.
Nice to see this kind of thing is beginning to be taken seriously in management books. Actually, it might have been taken seriously before now; I wouldn't know, because I don't read management books. But I occasionally browse them, and it hasn't seemed too prominent in my browsing. In any event, Daniel Kahneman et al. have a long article in the HBR on the behavioural economics of business decision making.
And it includes a useful checklist for when an organisation is making big decisions. Viz:
Preliminary Questions: Ask yourself
1. Check for Self-interested Biases
Is there any reason to suspect the team making the recommendation of errors motivated by self-interest?
Review the proposal with extra care, especially for overoptimism.
2. Check for the Affect Heuristic
Has the team fallen in love with its proposal?
Rigorously apply all the quality controls on the checklist.
3. Check for Groupthink
Were there dissenting opinions within the team?
Were they explored adequately?
Solicit dissenting views, discreetly if necessary.
Challenge Questions: Ask the recommenders
4. Check for Saliency Bias
Could the diagnosis be overly influenced by an analogy to a memorable success?
Ask for more analogies, and rigorously analyze their similarity to the current situation.
5. Check for Confirmation Bias
Are credible alternatives included along with the recommendation?
Request additional options.
6. Check for Availability Bias
If you had to make this decision again in a year’s time, what information would you want, and can you get more of it now?
Use checklists of the data needed for each kind of decision.
7. Check for Anchoring Bias
Do you know where the numbers came from? Can there be:
…extrapolation from history?
…a motivation to use a certain anchor?
Reanchor with figures generated by other models or benchmarks, and request new analysis.
8. Check for Halo Effect
Is the team assuming that a person, organization, or approach that is successful in one area will be just as successful in another?
Eliminate false inferences, and ask the team to seek additional comparable examples.
9. Check for Sunk-Cost Fallacy, Endowment Effect
Are the recommenders overly attached to a history of past decisions?
Consider the issue as if you were a new CEO.
Evaluation Questions: Ask about the proposal
10. Check for Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect
Is the base case overly optimistic?
Have the team build a case taking an outside view; use war games.
11. Check for Disaster Neglect
Is the worst case bad enough?
Have the team conduct a premortem: Imagine that the worst has happened, and develop a story about the causes.
12. Check for Loss Aversion
Is the recommending team overly cautious?
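For readers who like their checklists machine-readable, the twelve checks above can be sketched as a simple data structure plus a review helper. This is purely illustrative and not from the article; the grouping follows the three headings above, and the function name and answer format are my own invention.

```python
# Illustrative sketch: the HBR 12-point checklist encoded as data,
# grouped as in the article. Names and structure here are invented
# for illustration, not taken from the original.

CHECKLIST = {
    "Preliminary questions (ask yourself)": [
        ("Self-interested biases", "Any reason to suspect errors motivated by self-interest?"),
        ("Affect heuristic", "Has the team fallen in love with its proposal?"),
        ("Groupthink", "Were dissenting opinions explored adequately?"),
    ],
    "Challenge questions (ask the recommenders)": [
        ("Saliency bias", "Is the diagnosis overly influenced by a memorable analogy?"),
        ("Confirmation bias", "Are credible alternatives included?"),
        ("Availability bias", "What information would you want in a year's time?"),
        ("Anchoring bias", "Do you know where the numbers came from?"),
        ("Halo effect", "Is success in one area assumed to carry over to another?"),
        ("Sunk-cost fallacy / endowment effect", "Is the team overly attached to past decisions?"),
    ],
    "Evaluation questions (ask about the proposal)": [
        ("Overconfidence / planning fallacy", "Is the base case overly optimistic?"),
        ("Disaster neglect", "Is the worst case bad enough?"),
        ("Loss aversion", "Is the recommending team overly cautious?"),
    ],
}

def review(answers):
    """Return the checks flagged as problems.

    `answers` maps a bias name to True when the reviewer judges
    that check to have failed for the proposal under review.
    """
    flagged = []
    for group, checks in CHECKLIST.items():
        for bias, question in checks:
            if answers.get(bias):
                flagged.append((group, bias, question))
    return flagged
```

For example, `review({"Halo effect": True})` returns the one flagged check along with its group and prompt question, which could then drive the mitigations the article recommends.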