Ken’s last post seeks to crowdsource ideas for teaching law students about some of their cognitive biases. I’d been contemplating a post on something I’d read in Super Crunchers, and this gave me the perfect opportunity.
Good questions Ken. I can’t answer them very satisfactorily but I hope the Tropposphere in its dialectical wisdom comes up with some good ideas for you. One thing your post puts me in mind of is a result I can reference for you if you want. When someone tried to predict the decisions of Supreme Court judges using an extremely crude algorithm that simply asked whether the decision under appeal was ‘left’ or ‘right’ leaning, they found that the algorithm predicted more accurately than constitutional experts.
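To make the crudeness of the approach concrete, here is a toy sketch of the kind of rule described above. This is an illustration only, not the actual study’s model; the function name and the ‘affirm/reverse’ framing are my own assumptions about how such a rule might be expressed.

```python
# Toy sketch (NOT the actual study's algorithm): predict a justice's
# vote from ideology alone. The rule assumed here: a justice votes to
# affirm a lower-court decision from their own side of the spectrum
# and to reverse one from the opposite side.

def predict_vote(justice_leaning: str, lower_court_leaning: str) -> str:
    """Return 'affirm' or 'reverse' using only ideological labels.

    Both arguments are assumed to be 'left' or 'right'.
    No legal argument is consulted at any point.
    """
    if justice_leaning == lower_court_leaning:
        return "affirm"
    return "reverse"
```

The point of the sketch is what it leaves out: the rule reads none of the briefs and none of the reasoning, yet a predictor of roughly this character reportedly beat the experts overall.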
But there was more to it than that. The algorithm picked those leaning right much better than the experts, and those leaning left somewhat worse! As close as you’re gonna get to catching them with their ideological pants down I would have thought.
This led to some argy bargy between me and a couple of conservatives (I hope I’m not offending them – the ‘c’ is small as in Burke and I have the greatest regard for conservatism – and liberalism and social democracy). Anyway, the exchange and the algorithm which predicted judicial outcomes better than those who read the legal arguments are over the fold for your delectation and deliberation.
Pedro said:
I don’t know whose pants are supposed to be down, but we black letter lawyers would not be surprised at the finding. Conservatives are more predictable because they are more often correct! Lionel Murphy just made stuff up; Garfield Barwick at least tried to put stuff into historical context. You might like what the lefty judges are saying, but if you’re organising your affairs on the basis of settled law, it’s best if it stays settled.
Nicholas Gruen said:
Pedro, the algorithm wasn’t a black letter lawyer. It wasn’t trying to think about the law. It was trying to work out what was left wing and what was right wing so it could pick the latter option. Presumably the ‘experts’ were seeking to predict judges’ decisions based on consistency with their past reasoning, and that of the court.
Patrick said:
Nick, you may have misunderstood Pedro’s point, which is that predictability is in and of itself a ‘right-wing’ virtue in the strange world of law. So one would expect right-wing lawyers to be more predictable and left-wing lawyers to be less predictable, without having any basis on which to conclude that one is more biased than the other.
Someone is not necessarily less biased just because they are less predictable; indeed, the opposite may be the case.
If your ‘right wing’ favours minimalism, for example, then they will be very predictable, as (for example, using that Court) Justices Thomas and Scalia are. If your ideology is the ‘fairness of the case’, which is a terrible but very left-wing ideology, then the algorithm may indeed find you harder to pick, whether because it has a different concept of fairness or perhaps because its proxies aren’t sensitive enough. For example, Ginsburg herself may not know what the fair outcome is as between two large corporations, and she may base her reasoning on entirely un-algorithmic ‘knock-on’ effects for cases whose ‘fairness’ she does feel strongly about.
So that study may be very valid or complete shite, and it would take a lot of time, and probably a lot of different algorithms, to work out which.
Nicholas Gruen said:
Thanks Patrick, I may have misunderstood Pedro, but I still don’t think you can get him off the hook. The issue here was not predictability but how you predict the outcome. If you want to check out the original article, you may trump me, but the reportage I read (in Super Crunchers, by the way) didn’t say that the left leaning judges were less predictable, only that they were less predictable with an algorithm that predicted their decisions based on the assumption that they were politically biased. That is the nub of it, it seems to me.
Pedro said:
Patrick’s explanation of the essence of my point is correct. Perhaps the algorithm is different, but I did assume that anyone seeking to identify right wing lawyers would be looking for signs of conservatism.
So now we have the algorithm. It is in some senses ‘complete shite’, as Patrick speculated. But that complete shite, that attempt simply to work out where a case sits on the ideological spectrum, predicts the responses of the right leaning judges so much better than those of the liberal ones that, overall, it still trumps the domain experts (who one would like to think examine the actual arguments), even though it does worse at predicting the decisions of the liberals.
It seems to me that this is as ‘objective’ a demonstration of the bad faith of the right leaning judges as one could hope to find within the modest resources of the social sciences. Still, it’s possible that when left leaning judges were busily reweighting the jurisprudential scales in Roe v Wade and Brown v Board of Education, similar algorithms would have predicted the liberal judges more accurately. And at least with the latter of the two cases above I (think I) support the outcome. On the other hand the findings are also evidence for Paul Krugman’s thesis that the right in the US are not conservatives but revolutionaries, with ‘revolutionary’ implying the lack of any sense of the political legitimacy of your political opponents.
I think this also has some bearing on the strange policy of the current US President who, in the face of constant signs of bad faith from his opponents, endlessly rehearses the idea of bipartisanship. The result is the well documented phenomenon of the President negotiating with himself and the resulting drift of the apparent centre of politics further and further into the world of craziness. Negotiating with a bot is not a good idea – certainly not one programmed as illustrated above.
Anyway, it’s a pretty interesting subject so, rather than derail Ken’s thread, here’s my own. I’d be interested in Pedro’s and Patrick’s responses on the merits of our discussion, but also in thoughts from further afield.