The probability of a massive nuclear war in the next 10 years between any of the 9 current nuclear powers (US, UK, France, Russia, China, India, Pakistan, NK, Israel) seems low. The bluster of the leaders is meant to make the threat look a bit bigger than it is in order to gain negotiation advantages, but bluster is usually just bluster without consequences, because no elite wants to destroy itself or the population it feeds off.
Yet, whilst the threat may be minimal over a 10-year period, it is bigger if we consider a 100-year period. And what about the next 10,000 years, a mere blip when you consider the age of the world? I would guess the odds are about 50-50 that there is no major nuclear war in the next 100 years, but I have to say that with our current technology, the odds of a major nuclear conflict in the next 10,000 years are close to 100%. Somewhere, somehow, on our current trajectory there will be a major f-up leading to a huge conflict that kills the vast majority of us.
Just think of how close humanity has already come in the last century. Since 1900, we have had two world wars, a near-nuclear war (the Cuban missile crisis), several periods of heightened nuclear threat between rivals (India/Pakistan, Israel/Iran, NK/US), and rapid further arms development such that more nuclear destruction can be delivered faster and farther than before.
Other devastating technology has also emerged, including biological weapons and automated systems. In the near future we can expect automated weapons systems run by artificial intelligence that make moves faster than humans can think, in order to counter threats from artificial systems that likewise move faster than humans can think. Most of us might be killed in a second for reasons known only to AI systems that themselves perish in the conflicts they start. Not a happy thought.
It seems near-certain that human conflict will go seriously wrong in the long run, and there doesn’t seem to be all that much we can do about it either: if the current enemy builds something that can destroy us in 0.0001 seconds, we too must have something that can destroy them in 0.0001 seconds, complete with detection systems that can be fooled faster than any human can correct them.
Also, if we can get a small benefit from appearing truly ready and able to destroy them unless we get our way, we are certain to do so time and time again. Leaders who gain from playing chicken with our mutual destruction get kudos (just think of JFK!). That is the nature of humanity and of human conflict: we look for small advantages in the here and now, and we reward our leaders for it. We simply do not live only to serve the long-run health of our species. Enough of us live to get the most out of our own short lives to ensure that at least some countries will sometimes have rulers ready to play nuclear chicken with others.
Humanity as a whole can only live on the edge for so long, though. Our luck will run out. Or will it?
From the point of view of the species, 10,000 years is a mere blip in our evolution, less than 5% of the time it is now believed we humans have been around. It is odd to think that we spent the first 99% of our time on the planet in near-perpetual war with other hunter-gatherers, and that the brief period of relative peace we are currently enjoying is likely to end with massive wars that wipe out most of us.
A massive war is not necessarily the end of humanity though, and we must ponder the question of where the hope lies in avoiding such an apocalypse, if not this time then the next.
There are now so many of us, spread so widely around the world, with so many well-stocked mountain bunkers and so many ocean-going ships carrying enough canned food to last their crews for centuries, that it is actually hard to envisage a nuclear war massive enough to wipe us out as a species. The same goes for biological wars or wars fought by robots and the like: very likely, pockets of us will survive in several places with enough food to weather the aftermath of any massive war, even if it is just small groups of us living in Antarctica, in nuclear submarines, or on the moon.
We humans are pretty fast breeders, so even if no more than 1,000 of us survive, at a 2% yearly rate of increase we’d be back to a billion within about 700 years (give or take a century: this is without using a calculator). At a 1% rate, it would take us about 1,400 years to return to a billion. If we restarted with just 2 humans and only a 1% rate of increase per year, we would still be back to a billion in about 2,000 years. So on the time-scales we are thinking of, a massive devastation that wipes out 99.999999% of us would be no more than a temporary blip in human evolution.
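For anyone who wants to check those back-of-envelope figures, here is a minimal sketch of the compound-growth sum: solving N0 × (1 + r)^t = N for t gives the number of years needed (the helper function name is just illustrative).

```python
import math

def years_to_reach(start_pop, target_pop, annual_growth):
    """Years of compound growth needed to go from start_pop to target_pop."""
    return math.log(target_pop / start_pop) / math.log(1 + annual_growth)

print(round(years_to_reach(1_000, 1e9, 0.02)))  # ~698 years: 1,000 survivors growing at 2% a year
print(round(years_to_reach(1_000, 1e9, 0.01)))  # ~1,388 years: the same survivors at 1% a year
print(round(years_to_reach(2, 1e9, 0.01)))      # ~2,013 years: restarting from just 2 people at 1%
```

The numbers come out close to the rounded figures above, so the calculator-free estimates hold up.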
The surviving humans would have the same key traits that make an apocalypse seem inevitable now: the basic intelligence to come up with devastating weapons, a relentless drive towards control over their environment, and other humans just as smart who compete in the here and now for dominance by willingly running the risk of massive conflicts.
Is the species destined for a cycle of near-destruction, followed by a recovery of our numbers, followed by a new arms race that eventually leads to near-destruction? Is this the pattern we’re looking at for the next million years or so, with only small differences from one cycle to the next (such as in the other species that make it to the next cycle alongside us, or the climate that the survivors live in)? Is there an alternative scenario in which a cycle ends not with near-destruction but with something else wherein the odds of near-destruction are truly zero?
It is hard to envisage an escape from the cycle because the three ingredients (intelligence, dominance drive, short-run incentives) seem innate to biological evolution. Any biological species as smart as us would have the same problem because both the dominance drive and the short-run incentives are a consequence of evolutionary pressures and our biology: a species without the relentless dominance drive is an evolutionary dead-end that would quickly be killed off by the more aggressive specimens. So the dominance drive is truly a given.
The short-term incentives are a given too, because the nature of our neural intelligence is adaptive, meaning that we constantly change our mental make-up in order to do well in an evolving environment. This precludes the very possibility that we could live for the long term, even if our bodies could keep going for millennia: our very nature is to change constantly, and hence our current selves slowly die, replaced by different selves that care about something different. Human minds that don’t change and that care for the millennia ahead rather than the next few generations are not human minds at all.
Is there hope in non-human intelligence then? Or the possibility that some group wins and establishes a world empire that subdues everyone into a peaceful society? Let’s take the two in turn, starting with the idea of the world empire, a staple of many philosophies.
Humanity has had enough experience with empires to know they are not a solution to the problem of conflict between technologically advanced humans. The dominance drive simply gets played out within the empire’s higher echelons. This was true for the Chinese empires, the Egyptian empires, the Ottoman Empire, the Roman Empire, the Japanese Empire, the British Empire, the Soviet Union, the EU, etc.: now and then their elites split up, so that you got massive civil wars or break-ups of the empire into smaller regions that then started to fight each other. In each empire, there were enormous conflicts within the elites. Palace guards, eunuchs, the various sons of the emperor, competing democratic parties, the noble families, competing ministries, Praetorian guards, religious leaders, etc.: they were all constantly conniving and scheming against each other, using all means at their disposal, including assassination and the latest technology.
So we should not believe that human conflict will be over if the world ‘comes together as one’. It is a fantasy that hides megalomania. An empire is even more dangerous than competing nation states, because with an empire the whole world would be involved in an internal conflict, whereas with competing nation states there might be a few countries left out of the firing line. There is thus no real hope in a human-led empire to avoid an apocalypse, though there is no shortage of humans shouting loudly that all will be well if they are in charge of everything. Such humans are normal, and that’s the problem.
How about the idea that we humans stop being in charge altogether and that machines truly take over, such that we follow their orders? How could that help prevent the seemingly inevitable cycle of near-destruction when any machine that would lord it over us will be designed on the instructions of humans? Is it conceivable that groups of humans will band together to consciously design artificial intelligence systems to truly enslave us to the will of those AI systems, for our own benefit? Wouldn’t we just rise up against such slavery, leading to either a new cycle (if we win or if we lose to a benevolent AI) or our complete destruction (if we lose to a not-so-benevolent AI)?
I can see a scenario, but it is an odd one. It is the scenario in which AI-style machines are the gods we consciously create and then willingly worship. I can imagine whole populations actively building and maintaining a Jesus Christ, a Zeus, a Buddha, and a Horus. Indeed, if one powerful group of humans does this for their god, I can see lots of other populations building their own gods to keep up.
How would this go, given the immense current limitations of AI and machines? I am not a programmer who is completely up to date about the current possibilities and bottlenecks, but I do know that we are still decades away from human-type intelligence and that AI systems are now so complex that no human knows all about this vast area either.
Individual humans and even teams of humans working on new AI see but a small part of the technology. So if we do create our own gods, it will be without any team of humans truly understanding what they are building, which means the results will be unexpected: it will be trial and error on a massive scale. Building our gods might take centuries and involve the active participation of simpler AI systems.
We have done this type of thing before. Just think of the cathedrals of Europe, like Notre-Dame or the Sagrada Familia. They took centuries to build, involving many generations. The original designs were changed many times along the way, and all the old cathedrals have had major repairs and additions. What fueled their construction and maintenance was the belief of the local population that they were involved in a holy enterprise. The populations that built them were competing with other populations to have more magnificent cathedrals.
I can see how populations could start to regard AI and robotics as the new cathedrals. Once the idea takes hold that we can build our own gods, I can see enormous enthusiasm emerging for such projects. AI will stop being something that is feared and will start to be something we crave.
Humanity as a whole would get different gods built by different populations. The natures of these gods would vary just as religions currently vary: the gods would reflect the religiosity of the populations that built them. Some would have a thirst for knowledge, others for conquest, yet others for beauty, perhaps even a few for sex or humour.
Would these gods help humanity escape the cycle though or would the devastation just be inflicted by competing gods rather than competing humans?
Though it is entirely possible that the gods we create in this scenario fight each other, or that at least a few of them start massive wars, there is hope in these gods precisely because some will differ from humans. When humans envisage gods, they usually do not truly think of their own actual natures, but envisage something they think is better than them. They will thus build gods in the image of what they hope for and aspire to.
There is thus the possibility that humanity would get a ruling religious pantheon in which the gods have made a deal with each other to keep the peace. Different groups of humans would worship different gods, maintaining and adding to them. If we are lucky, gods might emerge that do not have short-run incentives or innate competing dominance drives. Then we might survive without cycles of devastation.
Who knows, maybe we will get lucky and do this the first time round?
So there you have it: my odd hope that the seeming inevitability of human self-destruction in the coming millennia might be overcome by our religiosity. We just might at some point create the gods that will save us from ourselves by enslaving us. What a thing to hope for!