Civilization and the evolution of short sighted agents, Date: 2008-11-19
By: Atin Basuchoudhary, Sam Allen and Troy Siemers
We model an assurance game played within a population with two types of individuals — short-sighted and foresighted. Foresighted people have a lower discount rate than short-sighted people. These phenotypes interact with each other. We define the persistent interaction of foresighted people with other foresighted people as a critical element of civilization, and the interaction of short-sighted people with other short-sighted people as critical to the failure of civilization. We show that whether the short-sighted phenotype will be an evolutionarily stable strategy (and thus lead to the collapse of civilization) depends on the initial proportion of short-sighted people relative to people with foresight, as well as on their relative discount rates. Further, we explore some comparative-static results that connect the probability of the game continuing and the relative size of the two discount rates to the likelihood that civilization will collapse.
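To make the setup concrete, here is a minimal sketch (mine, not the authors') of the kind of dynamics the abstract describes: a stag-hunt style assurance game in a mixed population, where each type values the repeated interaction under its own discount factor and the mix evolves by a simple replicator update. The payoff numbers, the discounting rule and the update rule are illustrative assumptions, not the paper's specification.

```python
# A minimal, self-contained sketch of the mechanism the abstract describes:
# an assurance (stag-hunt) game in a population of two phenotypes, where each
# type's effective payoff is the value of an indefinitely repeated interaction
# under its own discount factor. Payoff numbers, discounting rule and the
# replicator update are illustrative assumptions, not the paper's.

# One-shot assurance-game payoffs to the row player.
PAYOFF = {
    ("coop", "coop"): 4.0, ("coop", "defect"): 0.0,
    ("defect", "coop"): 3.0, ("defect", "defect"): 2.0,
}

def effective(one_shot, delta, p_continue):
    """Value of a per-round payoff when the game continues each round with
    probability p_continue and the player discounts by factor delta."""
    return one_shot / (1.0 - delta * p_continue)

def replicator_step(x, delta_far, delta_short, p_continue, step=0.1):
    """x is the share of foresighted (cooperating) types; returns the
    next-period share under a simple discrete replicator update."""
    far_one_shot = x * PAYOFF[("coop", "coop")] + (1 - x) * PAYOFF[("coop", "defect")]
    short_one_shot = x * PAYOFF[("defect", "coop")] + (1 - x) * PAYOFF[("defect", "defect")]
    far_fit = effective(far_one_shot, delta_far, p_continue)        # more patient type
    short_fit = effective(short_one_shot, delta_short, p_continue)  # less patient type
    mean_fit = x * far_fit + (1 - x) * short_fit
    return x + step * x * (far_fit - mean_fit) / mean_fit

if __name__ == "__main__":
    # Same discount factors, different starting mixes: the population can tip
    # either way, which is the paper's point about initial proportions.
    for x0 in (0.10, 0.20, 0.80):
        x = x0
        for _ in range(2000):
            x = replicator_step(x, delta_far=0.95, delta_short=0.60, p_continue=0.9)
        print(f"initial foresighted share {x0:.2f} -> long-run share {x:.2f}")
```

With these made-up numbers the tipping point sits at roughly a 17 per cent foresighted share: start below it and the short-sighted type takes over, start above it and cooperation spreads, which is the abstract's claim about initial proportions.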
I have to say I find much of the mathematical modelling that micro-economists do really irritating. The conclusions are usually bleeding obvious – in this case, that a sufficient percentage of sociopaths will make civilisation unstable.
What’s the policy implication, I wonder? Should we replace cricket questions in the citizenship test with ones that separate the long-sighted sheep from the short-sighted goats? Or should we include the virtue of long-sightedness in the new national curriculum?
I see a strong confusion in this work between the cooperative/selfish dichotomy and the discount rate (or short-sighted/foresighted dichotomy). It seems to me perfectly logically consistent to be a selfish long-term thinker or a cooperative short-term thinker.
While there are obvious dangers in short-sighted thinking, there are equally dangers in excessively long-term thinking. Let us suppose some foresighted government back in 1900 planned a huge coal storage and distribution infrastructure and they planned hundreds of years ahead, building silos and laying tracks for the steam engines that they knew would be needed. Then along comes petroleum and rubber tyres and private motor vehicles — last year’s infrastructure is suddenly worthless and the country next door that did nothing to plan for the future finds themselves with a huge advantage. What I’m saying is that you can’t make plans for the unexpected, which rapidly bounds the effectiveness of long-term planning.
A worse problem with long-term thinking is that powerful people become emotionally attached to their plans, determined to see their vision through to the end regardless of new information. Short-term thinking is automatically adaptive in the face of a changing world.
I also find it difficult to accept that a simple two-option game theory model can deduce the effect of making decisions now that will result in the opportunity (never guaranteed) of payoffs in the future. In a way, the report presumes what it concludes, because the game-theory payoffs are instantaneous, so it just magically gives the long-term strategy a higher payoff. Somehow we all “just know” that’s the most efficient methodology.
As far as the analysis of equilibrium points goes, yeah the maths makes sense. Errr, use more iterative computer modeling, support your local coder. But maths with weak assumptions rapidly becomes useless when applied to the real world (same goes for simulations with weak assumptions for that matter).
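To put my money where my mouth is: with the same illustrative payoffs as the sketch above (still nobody's real numbers), the tipping point can be written down directly and swept over the parameters the abstract cares about, the continuation probability and the two discount factors.

```python
# With illustrative assurance-game payoffs (4/0 for cooperating, 3/2 for
# defecting -- made-up numbers, not the paper's), compute the basin boundary
# between "civilization spreads" and "civilization collapses" and sweep it
# over the continuation probability and the short-sighted discount factor.

def collapse_threshold(delta_far, delta_short, p_continue,
                       a=4.0, b=0.0, c=3.0, d=2.0):
    """Initial share of foresighted types below which the short-sighted type
    takes over, when each type's payoff stream is worth
    one_shot / (1 - delta * p_continue)."""
    k_far = 1.0 / (1.0 - delta_far * p_continue)
    k_short = 1.0 / (1.0 - delta_short * p_continue)
    return (d * k_short - b * k_far) / ((a - b) * k_far - (c - d) * k_short)

if __name__ == "__main__":
    for p in (0.5, 0.7, 0.9):
        for delta_short in (0.6, 0.8):
            t = collapse_threshold(delta_far=0.95, delta_short=delta_short,
                                   p_continue=p)
            print(f"p={p:.1f} delta_short={delta_short:.1f} "
                  f"-> need foresighted share > {t:.2f}")
```

A longer expected interaction (higher continuation probability) and a wider patience gap between the two types both lower the share of foresighted people needed to keep the cooperative outcome, which is the comparative-static story; whether the assumptions behind it deserve any more trust is another matter.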
Somebody told me the other day that Google has a 200-year business plan, which on the face of it sounds utterly stupid.
Yes,
That’s the sort of stupid thing people do say. I remember when I was at school the school nerd was said to think ‘ten moves ahead’ in chess. The nerd wasn’t much chop at chess, let me tell you. If he was thinking ten moves ahead, he had amazingly bad judgement about the positions that would eventuate after the ten moves.
Tel’s point is well taken and one we are wrestling with. It is one reason why this paper is still a working paper! How about this as an argument, though: both our types are actually self-interested. They are driven by the payoffs they can get either through cooperation or not. We do not explicitly model external shocks that change the planning horizon (rubber tyres get invented, etc.), but our paper is not inconsistent with that variation. A shock would change the payoffs without really changing the internal structure of the game or the distinction between long- and short-sighted types. The change in payoffs may change the nature of the game, though — which may lead to a stable polymorphic equilibrium. Which would be cool. Also, in our paper both types of players are looking ahead — it is just that one type places a greater value on today compared to the other. So the comparison is not really between people who plan for the future and people who don’t (Google’s 200-year plan, in our context, would be an extremely short-sighted approach if true).
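To put the polymorphic-equilibrium point slightly more formally (treating the two types' effective payoffs as a symmetric 2x2 game, which ours is not exactly, so take this only as a heuristic): with payoff matrix

\[
A=\begin{pmatrix} a & b\\ c & d\end{pmatrix},
\]

the replicator dynamics have an interior rest point at

\[
x^{*}=\frac{d-b}{(a-c)+(d-b)},
\]

which is a stable polymorphic mix when \(a<c\) and \(d<b\), and an unstable tipping point when \(a>c\) and \(d>b\). The assurance game is the second case, so a payoff shock large enough to flip those inequalities is what would deliver a stable mixture of both types.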
As far as policy implications go, there are many. From the immigration perspective, the points system in Australia and Canada is designed precisely so that foresighted people self-select. In the US it is more of a crapshoot, but the long, drawn-out immigration process favors the patient types who are willing to wait (the foresighted people in our model). This model also has applications to turning around failed states and to the role of public education in promoting civil society. I am going to ignore Chris Lloyd’s comment because it is rather rude — with the exception that microeconomic modeling provides many interesting insights into human behavior. Read Freakonomics or The Undercover Economist. Thanks for all the comments. This is better than some of the comments I get from referees! I look forward to carrying on this conversation.