Counteracting our biases

In an earlier post, one of a series by me and subsequently by Ken as well, I suggested that an important part of any professional education should be a kind of counter-narrative in which those who learn a profession are also made familiar with that profession’s cognitive biases, with a view to lessening them in practice.

Nice to see this kind of thing is beginning to be taken seriously in management books. Actually it might have been taken seriously before now; I wouldn’t know, because I don’t read management books. But I occasionally browse them and it hasn’t seemed too prominent in my browsing. In any event, Daniel Kahneman et al. have a long article in the HBR on the behavioural economics of business decision making.

And it includes a useful checklist for when an organisation is making big decisions. Viz:

1. Is there any reason to suspect motivated errors, or errors driven by the self-interest of the recommending team?

2. Have the people making the recommendation fallen in love with it?

3. Were there dissenting opinions within the recommending team?

4. Could the diagnosis of the situation be overly influenced by salient analogies?

5. Have credible alternatives been considered?

6. If you had to make this decision again in a year, what information would you want, and can you get more of it now?

7. Do you know where the numbers came from?

8. Can you see a halo effect?

9. Are the people making the recommendation overly attached to past decisions?

10. Is the base case overly optimistic?

11. Is the worst case bad enough?

12. Is the recommending team overly cautious?

Preliminary Questions: Ask yourself

1. Check for Self-interested Biases

Is there any reason to suspect the team making the recommendation of errors motivated by self-interest?

Review the proposal with extra care, especially for overoptimism.

2. Check for the Affect Heuristic

Has the team fallen in love with its proposal?

Rigorously apply all the quality controls on the checklist.

3. Check for Groupthink

Were there dissenting opinions within the team?

Were they explored adequately?

Solicit dissenting views, discreetly if necessary.

Challenge Questions: Ask the recommenders

4. Check for Saliency Bias

Could the diagnosis be overly influenced by an analogy to a memorable success?

Ask for more analogies, and rigorously analyze their similarity to the current situation.

5. Check for Confirmation Bias

Are credible alternatives included along with the recommendation?

Request additional options.

6. Check for Availability Bias

If you had to make this decision again in a year’s time, what information would you want, and can you get more of it now?

Use checklists of the data needed for each kind of decision.

7. Check for Anchoring Bias

Do you know where the numbers came from? Can there be

…unsubstantiated numbers?

…extrapolation from history?

…a motivation to use a certain anchor?

Reanchor with figures generated by other models or benchmarks, and request new analysis.

8. Check for Halo Effect

Is the team assuming that a person, organization, or approach that is successful in one area will be just as successful in another?

Eliminate false inferences, and ask the team to seek additional comparable examples.

9. Check for Sunk-Cost Fallacy, Endowment Effect

Are the recommenders overly attached to a history of past decisions?

Consider the issue as if you were a new CEO.

Evaluation Questions: Ask about the proposal

10. Check for Overconfidence, Planning Fallacy, Optimistic Biases, Competitor Neglect

Is the base case overly optimistic?

Have the team build a case taking an outside view; use war games.

11. Check for Disaster Neglect

Is the worst case bad enough?

Have the team conduct a premortem: Imagine that the worst has happened, and develop a story about the causes.

12. Check for Loss Aversion

Is the recommending team overly cautious?


11 Responses to Counteracting our biases

  1. Senexx says:

    That’s pretty good. It reminds me of the way I was taught Ethics via Kallman & Grillo.

  2. Charles says:

    In my experience, in both Government and private sectors, so much of this rings true – particularly over-optimism and the attachment to past decisions.

    This sort of stuff was never raised when I did my MBA in the 1990s, and in all my time it has been resented by more senior staff whenever it is raised.

    Government, however prone to fallacious decision-making, is forced to accept that a multiplicity of views prevails in the community and that these views therefore need to be at least acknowledged in decisions.

    Business leaders are far more prone to tunnel vision predicated on commercial biases (understandably enough) and tend to be openly hostile to any attempt to add some intellectual rigour to their analyses.

  3. Benjamin says:

    I see this a lot in my office, where dissenting opinions are treated as a threat to management rather than explored.

  4. Patrick says:

    My work encouraged us to read about half-a-dozen articles by Kahneman and similar…but I don’t think many people take it very seriously :(

  5. Nicholas Gruen says:

    This article (pdf) is about organisational and political pathologies which conspire to produce lousy outcomes.

    Bent Flyvbjerg, 2009, “Survival of the unfittest: why the worst infrastructure gets built—and what we can do about it”, Oxford Review of Economic Policy, Volume 25, Number 3, pp. 344–367.

    This passage reminds me of the confusion between the public interest and the ‘can do’ official and regulation impact statements.

    project managers are part of the problem, not the solution. This situation may need some explication, because it may sound to many like an unlikely state of affairs. After all, it may be agreed that project managers and other professionals involved in major infrastructure provision ought to be interested in being accurate and unbiased in their work. It is even stated in the Project Management Institute (PMI)’s Code of Ethics and Professional Conduct (PMI, 2006, pp. 4, 5) that project managers should ‘provide accurate information in a timely manner’ and they must ‘not engage in or condone behaviour that is designed to deceive others’. Economists, engineers, planners, and others involved in major infrastructure provision have similar codes of conduct. But there is a dark side to their work, which is remarkably underexplored in the literature (Flyvbjerg, 1996).

    On the dark side, project managers and planners ‘lie with numbers’, as Wachs (1989) has aptly put it. They are busy not with getting forecasts and business cases right and following the PMI Code of Ethics but with getting projects funded and built. And accurate forecasts are often not an effective means for achieving this objective. Indeed, accurate forecasts may be counterproductive, whereas biased forecasts may be effective in competing for funds and securing the go-ahead for a project. ‘The most effective planner,’ says Wachs (1989, p. 477), ‘is sometimes the one who can cloak advocacy in the guise of scientific or technical rationality.’ Such advocacy would stand in direct opposition to PMI’s ruling that project managers should ‘make decisions and take actions based on the best interests of society’ (PMI, 2006, p. 2).

  6. Pingback: Checklist to decrease (slightly) the likelihood you’ll fool yourself «

  7. Nicholas Gruen says:

    Note to self: This article is also relevant to this issue “How Emotional Intelligence Can Improve Decision-Making”

  8. Nicholas Gruen says:

    Here’s another good bit of the countercultural repertoire – promoted by Kahneman

  9. Nicholas Gruen says:

    An interesting article on some of this stuff here via HBR.

  10. Nicholas Gruen says:

    Reading Francis Bacon, it becomes clear that a preeminent part of his own strategy for getting human reason onto a new, more productive footing was throwing off the many ‘idols of the mind’, which he classified into four kinds – from Wikipedia:

    Idols of the Tribe (Idola tribus): This is humans’ tendency to perceive more order and regularity in systems than truly exists, and is due to people following their preconceived ideas about things.
    Idols of the Cave (Idola specus): This is due to individuals’ personal weaknesses in reasoning due to particular personalities, likes and dislikes.
    Idols of the Marketplace (Idola fori): This is due to confusion in the use of language and taking some words in science to have a different meaning than their common usage.
    Idols of the Theatre (Idola theatri): This is the following of academic dogma and not asking questions about the world.

    As Bacon put it, “the doctrine of Idols is to the interpretation of nature what the doctrine of the refutation of sophisms is to common logic.”

     

  11. Nicholas Gruen says:

    Thaler on this stuff.
