Nicholas Gruen’s post about Einstellung (a person’s predisposition to solve a given problem in a specific manner even though there are “better” or more appropriate methods of solving it) has given me an idea. I would like to devise a couple of seminars for undergraduate Law students to be delivered as part of the subject Jurisprudence that I am next teaching at CDU in semester 2 2012 (so there’s plenty of time to work out how to do it).
When you consider obstacles to rational thinking and problem-solving like Einstellung, “framing” as enunciated by people like Kahneman and Tversky, confirmation bias, and Jonathan Haidt’s “social intuition” model of moral decision-making, it’s pretty clear that much of what we usually believe to be genuinely reflective, critical and analytical thinking, both on our own part and by others, is actually much less considered and rational than we might imagine.
I have in mind a couple of seminars that would explain the basics of each of these research approaches to cognitive biases or shortcomings. We would also have students undertake versions of some of the surveys that led to these research findings.
However, what I’m also wondering is whether there are any well-accepted practical techniques for diagnosing and correcting such cognitive biases in ourselves, other than the obvious but difficult one of attempting to adopt a skeptical stance in interrogating one’s own thought processes, especially when dealing with a question likely to arouse strong emotions. And what useful indicators might exist to tell us when to engage in that sort of careful skeptical reflection about our own motives, assumptions and thought processes? Heuristics and habit are unavoidable and useful behaviours. None of us has the time or energy to reflect carefully and skeptically on every decision we make in our daily lives, and in most cases repeating behaviour that worked previously is both efficient and sensible. Are there any reliable guides for picking when that might not be so?
Any hints or observations on how best to run seminars like this would be gratefully received. For example, Patrick noted that his law firm runs training programs where they examine cognitive phenomena like framing.
Most of the literature on cognitive biases is more concerned with building a catalogue of biases and less concerned with fixing them. Partly, this is because fixing them actually turns out to be quite difficult.
A couple of good discussions of current thinking on debiasing (that is, strategies for removing cognitive biases) are Lilienfeld et al. (2009) and Larrick (2004). Larrick’s piece is particularly interesting as it attempts to match debiasing strategies to causes of biases. It’s also relevant for your question, in a way, because he points out that most education on biases is more about pointing out what he calls “stupid human tricks” than giving people techniques to actually fix them.
The most successful debiasing techniques tend to involve considering opposites, or adopting a counterfactual mindset. Motivation and incentives don’t tend to work, because they assume people are capable of making the right decision but simply can’t be bothered. Most of the research shows that even when people are given incentives to make unbiased decisions, they still make biased decisions.
We’re also not very good at debiasing ourselves, because we tend to believe that biases are things that affect other people, not us. So if you give someone the tools to debias themselves, they’ll tend not to use them as they don’t see them being necessary.
You might also be interested in Burke on debiasing prosecutors, though he seems generally unaware that some of the strategies he suggests have been shown to be ineffective. There are a few more articles along these lines, but I can’t bring them to mind right now.
Burke, A. (2007). Neutralizing cognitive bias: An invitation to prosecutors. NYU Journal of Law and Liberty, 2, 512-530.
Larrick, R. P. (2004). Debiasing. In D. J. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 316-337). Malden: Blackwell Publishing.
Lilienfeld, S. O., Ammirati, R., & Landfield, K. (2009). Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science, 4(4), 390-398. doi: 10.1111/j.1745-6924.2009.01144.x
Good questions Ken. I can’t answer them very satisfactorily but I hope the Tropposphere in its dialectical wisdom comes up with some good ideas for you. One thing your post puts me in mind of is a result I can reference for you if you want. When someone tried to predict the decisions of Supreme Court judges using an extremely crude algorithm that asked whether the decision under appeal was ‘left’ or ‘right’ leaning, they found that the algorithm predicted more accurately than constitutional experts.
But there was more to it than that. The algorithm picked those leaning right much better than the experts, and those leaning left somewhat worse! As close as you’re gonna get to catching them with their ideological pants down I would have thought.
Another thought – the Catholic Church appointed a devil’s advocate to try to ensure unpopular perspectives were covered. When I was in a meeting at Google with Kaggle, the people we met there commented on how remarkable it was that properly constituted crowds came up with such insight and good sense, while committees were so dysfunctional and given to boneheaded stupidity, stubbornness, closed-mindedness and all the rest. It occurred to me that that is a good mindset, a good problematic, for the senior managers of any organisation to have.
I don’t have any helpful suggestions re. your actual question, but your students should enjoy this clip.
Don’t forget the Dunning-Kruger Effect.
“But there was more to it than that. The algorithm picked those leaning right much better than the experts, and those leaning left somewhat worse! As close as you’re gonna get to catching them with their ideological pants down I would have thought.”
I don’t know whose pants are supposed to be down, but we black letter lawyers would not be surprised at the finding. Conservatives are more predictable because they are more often correct! Lionel Murphy just made stuff up; Garfield Barwick at least tried to put stuff into historical context. You might like what the lefty judges are saying, but if you’re organising your affairs on the basis of settled law, it’s best if it stays settled.
The big guy in decision making these days in psychology is Gerd Gigerenzer, although trying to differentiate his theories from others is rather difficult. You can see a not very thrilling video of him talking here: http://www.edge.org/video/dsl/gigerenzer.html. His suggestion is to teach statistical thinking. So much for easy solutions!
I also don’t think the answer to “whether there are any well accepted practical techniques for diagnosing and correcting such cognitive biases in ourselves” is a simple one — there’s a whole bunch of stuff in different domains. For example, a simple way of understanding Bayesian-type statistical problems is to think about them in terms of odds ratios rather than probabilities. People are just much better at odds ratios than probabilities, as odds are supposed to be more ecologically valid. Other things, like how to deal with different types of framing, require other and less simple strategies (let alone problems with multiple sources — say, the type of thoughts that problem gamblers have — or problems with some sort of affective content).
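To make the odds-ratio point concrete, here’s a minimal sketch of a classic base-rate problem. The numbers (1% prevalence, 80% sensitivity, 9.6% false-positive rate) are the stock textbook figures for a screening-test example, not from any source in this thread; the point is that the odds form of Bayes’ theorem gives the same answer as the probability form but is far easier to do in your head.

```python
# Stock textbook numbers for a screening-test problem (illustrative only).
prevalence = 0.01     # P(condition)
sensitivity = 0.80    # P(positive | condition)
false_pos = 0.096     # P(positive | no condition)

# Probability form of Bayes' theorem: P(condition | positive test).
p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
posterior_prob = sensitivity * prevalence / p_positive

# Odds form: prior odds x likelihood ratio = posterior odds.
prior_odds = prevalence / (1 - prevalence)    # about 1 : 99
likelihood_ratio = sensitivity / false_pos    # about 8.3
posterior_odds = prior_odds * likelihood_ratio

# Same answer either way, but "odds of roughly 1 in 99 become roughly
# 1 in 12" is much easier to track mentally than the algebra above.
print(round(posterior_prob, 3))  # ~0.078
```

People presented with the probability version of this problem notoriously answer “about 80%”; stated as odds (or natural frequencies, as Gigerenzer recommends) the answer of under 8% tends to jump out.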
For what it’s worth, I have to give the decision-making lecture where I work, and if you have “clickers” which allow your audience to vote, it works really well (ours always have flat batteries :( ). You can just put up examples from the taxonomy of decision-making errors and get your class to answer some of them online, and people inevitably make the same mistakes as documented. You can also use really relevant examples that people like, such as global warming, where you can cheat people easily by shifting the baseline year (you get different amounts of warming depending on where you start). I always find the Australian is a good source of published dross for examples, and students like that because it’s a newspaper.
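The baseline trick is easy to demonstrate in a few lines. This sketch uses an entirely made-up temperature series (a steady 0.02°/year trend with an artificial spike in two years), not real climate data — the point is only that “warming since year X” depends heavily on which X you pick.

```python
# Illustrative made-up series: steady 0.02 deg/year trend since 1970,
# with an artificial spike in 1998 and 2005 (mimicking unusually hot years).
temps = {year: 0.02 * (year - 1970) + (0.3 if year in (1998, 2005) else 0.0)
         for year in range(1970, 2011)}

def warming_since(baseline_year, end_year=2010):
    """The naive 'change since baseline' figure a newspaper might quote."""
    return temps[end_year] - temps[baseline_year]

# Start the comparison at the 1998 spike and the series appears to have
# cooled slightly; start in 1990 and it shows substantial warming.
print(round(warming_since(1998), 2))
print(round(warming_since(1990), 2))
```

Starting the comparison at a spike year makes the trend vanish (or reverse), which is exactly the cherry-picking move students can be asked to spot.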
Pedro, the algorithm wasn’t a black letter lawyer. It wasn’t trying to think about the law. It was trying to work out what was left wing and what was right wing so it could pick the latter option. Presumably the ‘experts’ were seeking to predict judges’ decisions based on consistency with their past reasoning, and that of the court.
Sorry — if you haven’t found it, then Gigerenzer’s older book “Simple Heuristics That Make Us Smart” is worth a look for the type of things that people use to solve problems well (there’s also a fair bit of stuff on expertise not covered in that book, although I can’t think of a good book on it).
You can peer through some of it at Amazon: http://www.amazon.com/gp/reader/0195143817/ref=sib_dp_pt#reader-link
LessWrong’s How to actually change your mind may be worth looking at (as I should).
Ken, unfortunately, although consistent with some of the above comments, my firm does it the hard way. You read a really long Kahneman article, fill out a reflection piece, discuss (both) with a facilitator, do role plays with a third party providing instant feedback, discuss and reiterate.
Nick, you may have misunderstood Pedro’s point, which is that predictability is in and of itself a ‘right-wing’ virtue in the strange world of law. So one would expect right-wing lawyers to be more predictable and left-wing lawyers to be less predictable, without having any basis on which to conclude that one is more biased than the other.
Someone is not necessarily less biased just because they are less predictable; indeed, the opposite may be the case.
If your ‘right wing’ favours minimalism, for example, then they will be very predictable, as (for example using that Court) Justice Thomas or Scalia are. If your ideology is the ‘fairness of the case’ which is a terrible but very left-wing ideology, then the algorithm may indeed find you harder to pick, whether because it has a different concept of fairness or perhaps its proxies aren’t sensitive enough. For example Ginsburg herself may not know what is the fair outcome as between two large corporations and she may base her reasoning on entirely un-algorithmic ‘knock-on’ effects for cases which she does feel strongly about the ‘fairness’ of.
So that study may be very valid or complete shite, and it would take a lot of time, and probably a lot of different algorithms, to work out which.
This is from a very long time ago, so I can’t remember any actual references, but there has been a lot of work done in information systems on decision support systems for identifying biases and how to avoid them.
Yep, HM Treasury UK for optimism bias.
Practical application to project management. A must-know for project managers.
http://www.hm-treasury.gov.uk/d/5%283%29.pdf
Thanks Patrick, I may have misunderstood Pedro, but I still don’t think you can get him off the hook. The issue here was not predictability but how you predict the outcome. If you want to check out the original article, you may trump me, but the reportage I read (in Super Crunchers, by the way) didn’t say that the left-leaning judges were less predictable, only that they were less predictable with an algorithm that predicted their decisions based on the assumption that they were politically biased. That is the nub of it, it seems to me.
[…] good for anyone, no matter how much conservatism is necessary in a given profession. To that end, Club Troppo’s Ken Parish (who, as you may be aware, is both a very fine lawyer and legal acade…. To my mind, I’ve always thought that part of a good legal education is gaining an […]
I just minutes ago posted this: CBC radio interview with Dr. AJ Hoffman, discussing climate change. (A method I use for notes to myself.)
Since 1973 (“How did we justify / rationalize overthrowing a democratically elected government?”) I’ve concerned myself with the processes and dynamics of public policy. A few years ago (late 90s) I studied cog-psych at Dalhousie, reading into the formation of opinions and the place they hold in individual identity. Fascinating stuff, but for me the point remains: what to do about irrationality? How to foster a community of rational agents?
I’m not an academic; I’m a technical communicator. The goal has been (My project started August 1976. 35 years … points for persistence?) and remains: a system that enables public discourse on policy issues. #participatory deliberation #praxis #techne
cheers
@ITGeek | @bentrem
p.s. The White House has released the President’s “long form” birth document. How will the “birthers” respond to mere logic and fact? Not trivial!
Nic @ 2. What is the difference between a “properly constituted crowd” and a committee and how could one make use of them?
Patrick’s explanation of the essence of my point is correct. Perhaps the algorithm is different, but I did assume that anyone seeking to identify right wing lawyers would be looking for signs of conservatism.
Here’s an article about it:
http://seekingalpha.com/article/190103-ian-ayres-s-super-crunchers-statistics-outperform-experts
These articles from Discover Magazine are relevant to the topic Ken:
The science of why we deny science
Is reasoning built for winning arguments rather than finding truth?
I’ve always understood that the best counter to human bias is a tangible checklist. Checklists have been used successfully in surgery to prevent errors and assumptions; possibly those committed to a skeptical line of enquiry could have a similar checklist to avoid reasoning traps. Off the top of my head:
– What is the issue?
– Does thinking about this issue make you angry or passionate?
– Have you sought out a range of counterviews in addition to your own?
– Have you identified the positive arguments in favour of these views?
– Have you engaged these arguments using a rational rather than ad hominem approach?
– Have you presented your conclusions to an uninvolved party for review?
I think that as a general background, you will find Sources of Power: How People Make Decisions to be fascinating reading. Don’t be put off by the title, it’s not some pomo wankfest.
It goes through decision making under a number of conditions and highlights how the rational model of decision-making is almost never applied. Instead, experts generally immediately have a decision in mind and can justify it later.
A really fascinating read.
In the past I’ve also recommended Influence by Robert Cialdini. Cialdini catalogues six psychological phenomena which have a remarkable impact on selling anything, from washing powder to the innocence of a defendant.
A common bias is to assume human intent or conspiracy in emergent phenomena. Turtles, Termites and Traffic Jams by Mitchel Resnick is a playful antidote for children and adults.
Stephen’s mention of checklists makes me think of argument mapping. Austhink run an argument mapping consultancy – though I can’t vouch for it. I also know someone in computing who runs ‘dialogue mapping’, which he claims has great effects on the ability of communities to do planning and so on. Given its proximity to law it might be worth contacting him.
In a way this is all pretty standard stuff in any decent teaching of clinical therapy work. Despite the sneering of the technicians (behavioural therapy, CBT, etc.), the analytical approach based on Freudian notions is the most useful – even if only touched upon in modified form.
It is useful because it focuses on what each party brings to the encounter. Ideas like projection, transference, counter-transference, collusion throw some light upon each transaction and hints at least, of how to be aware of such biases.
Also useful are the basic ideas from the Tavistock school on how groups work and how people operate in groups.
Good for getting some insight into how you as an individual operate with others are the Myers-Briggs tests. Relatively simple and non-threatening, they are however mostly used badly. They are best used to understand your own style and others’ styles, and how these styles influence communication – in the context of communication, that is, not in job interviews.
Basic communications theory is also useful – y’know, noise, sender, receiver, etc.
None of these alone is about problem solving – but all are to some extent necessary building blocks, and are superior to most “Problem Solving” how-to books – like De Bono’s.
I’m not sure if I’m understanding what you are after Ken but one of the things I’d be looking at for a group exercise would be something like this:
Have half the class read a collection of media reports, youtube, papers etc on a controversial / emotional case.
The other half read actual court transcripts – possibly difficult if too long – perhaps just a written judgment.
Then have a discussion on the case. Structured around some point. Document this on board or computer slides.
Then have a meta discussion on how the views were formed – not on media bias or reporting – but on how individuals were influenced. Court documents are also written from a point of view.
I hesitate to say this but the good PoMo stuff does lead you directly into questioning how, who and why points of view or decisions are arrived at.
Psychometrically speaking, Myers-Briggs has about as much validity as astrology. It even has a similar number of combinations (16 instead of 12) and grew in a similar fashion (derived from the swampy origins of what later branched into a science).
The Five Factor Model is “where it’s at” vis-a-vis personality testing, but even then it has poor predictive power. Humans just have a filthy habit of not fitting into neat boxes.
jc – you missed my point – I didn’t claim Myers-Briggs has much predictive ability – I explicitly said it’s not for the job interview process – where it is commonly used – I did claim that if two or more people do it and then compare communication styles it is extremely useful for undertaking communication styles and not mumbo jumbo.
Try it yourself – it’s simple and quick – especially the modified form – free online and useful.
btw – “Psychometrically speaking” in my opinion all the tests are pretty bloody awful and next to useless except as a starting point.
“undertaking communication styles” – no no – I meant “understanding communication styles”
[…] last post seeks to crowdsource ideas for teaching law students some of their cognitive biases. I’d […]
It’s interesting that numerous people on this thread associate Ken’s point with conservatism, when Ken didn’t raise that idea and it’s not immediately apparent that conservative heuristics are more stable than liberal ones.
Paul Monk is a senior consultant and co-founder of AusThink. (He’s presently on the editorial board of the Australia Defence Association; see his “Rethinking China: Australia and the World” at Nautilus.org.) While AusThink’s product belongs to the group of IBIS-based systems that persist with the graphical approach (cMAP and Compendium are others), and I abandoned that approach in the late 90s (after experiments using SGI VRML), I have to say that finding Dr. Monk’s writings in that period encouraged me in persisting with my work. (My Ajax-based approach dates from DEC03.)
I can’t encourage anyone to adopt graphical systems for more than brain-storming. (Sadly I have to say that even about Compendium, which is so well developed.) But I encourage everyone in this area generally. (I hesitate to use “structured analysis” since that’s taken a very restricted meaning, and apply my own “participatory deliberation” and “discourse-based decision support system”.)
At the risk of seeming glib: the graphical approach to wicked problems is a “good idea” that quite completely chokes out alternative thinking. HeyHo, and so it goes in the real world.
@ITGeek
p.s. http://www.austhink.com/monk/ has failed to load for years. AusThink is *cough* not responsive to input from non-credentialed practitioners.
[…] Centrist” label I first came across on Club Troppo appeals to me the most. Reading the Rooting out Cognitive Bias post was probably the most signficant trigger for thinking about ideas that I’d had […]
[…] trained to give legal responses (which harks back to the post which sparked Jim’s thoughts, Ken Parish’s post on cognitive biases and lawyers). Perhaps lawyers need to be able to recognise when their own legally trained response is not the […]
[…] an earlier post, and one of a series by me and subsequently Ken as well, I suggested that an important part of any professional education should be a kind of […]