The logic of the inevitable (nuclear) apocalypse. Can the Gods save us?

The probability of a massive nuclear war in the next 10 years between any of the 9 current nuclear powers (US, UK, France, Russia, China, India, Pakistan, NK, Israel) seems low. The bluster of leaders is meant to make the threat look a bit bigger than it is in order to gain negotiating advantages, but bluster is usually just bluster without consequences, because no elite wants to destroy itself or the population it feeds off.

Yet, whilst the threat may be minimal over a 10-year period, it is bigger if we consider a 100-year period. And what about the next 10,000 years, a mere blip when you consider the age of the world? I would guess the odds are about 50-50 that there is no major nuclear war in the next 100 years, but I have to say that with our current technology, the odds of a major nuclear conflict in the next 10,000 years are close to 100%. Somewhere, somehow, on our current trajectory there will be a major f-up leading to a huge conflict that kills the vast majority of us.

Just think of how close humanity has already come in the last century. Starting from 1900 AD, we have had two world wars, a near-nuclear war (the Cuban missile crisis), several periods of heightened nuclear threat between rival powers (India/Pakistan, Israel/Iran, NK/US), and rapid further arms development, such that more nuclear destruction can be delivered more quickly and over greater distances than before.

Other devastating technology has also emerged, including biological weapons and automated systems. In the near future we can expect automated weapons systems run by artificial intelligence that make moves faster than humans can think, in order to counter rival artificial systems that also move faster than humans can think. Most of us might be killed in a second for reasons known only to AI systems that themselves perish in the conflicts they start. Not a happy thought.

That human conflict will at some point go seriously wrong thus seems near-certain in the long run, and there doesn’t seem to be all that much we can do about it either: if the current enemy builds something that can destroy us in 0.0001 seconds, we too must have something that can destroy them in 0.0001 seconds, complete with detection systems that can be fooled faster than any human can correct them.

Also, if we can get a small benefit from appearing truly ready and able to destroy them unless we get our way, we are certain to do so time and time again. Leaders who gain from playing chicken with our mutual destruction get kudos (just think of JFK!). That is the nature of humanity and of human conflict: we look for small advantages in the here and now, and we reward our leaders for it. We simply do not live only to serve the long-run health of our species. Enough of us live to get the most out of our own short lives to ensure that at least some countries will sometimes have rulers ready to play nuclear chicken with others.

Humanity as a whole can only live on the edge for so long, though. Our luck will run out. Or will it?

From the point of view of the species, 10,000 years is a mere blip in our evolution, less than 5% of the time it is now believed we humans have been around. It is odd to think that we spent the first 99% of our time on the planet in near-perpetual war with other hunter-gatherers, and that the brief period of relative peace we are currently enjoying is likely to end with massive wars that wipe out most of us.

A massive war is not necessarily the end of humanity though, and we must ponder the question of where the hope lies in avoiding such an apocalypse, if not this time then the next.

There are now so many of us, spread so widely around the world, with so many well-stocked mountain bunkers and so many ocean-going ships carrying enough canned food to last their crews for centuries, that it is actually hard to envisage a nuclear war massive enough to wipe us out as a species. The same goes for biological wars or wars fought by robots and the like: very likely, pockets of us will survive in several places with enough food to weather the aftermath of any massive war, even if it is just small groups of us living in Antarctica, in nuclear submarines, or on the moon.

We humans are pretty fast breeders, so even if no more than 1,000 of us survive, at a 2% yearly rate of increase we’d be back to a billion within about 700 years (give or take a century: this is without using a calculator). At a 1% rate, it would take us about 1,400 years to return to a billion. If we restarted with just 2 humans and managed only a 1% rate of increase per year, we would still be back to a billion in about 2,000 years. So on the time-scales we are thinking of, a massive devastation that wipes out 99.999999% of us would be no more than a temporary blip in human evolution.
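For readers who do want to reach for a calculator: these back-of-envelope figures follow from simple compound growth, where the number of years needed is ln(target/start) divided by ln(1 + yearly rate). A minimal sketch in Python (the function name is just illustrative) that checks the numbers above:

```python
import math

def years_to_reach(start, target, annual_rate):
    """Years of compound growth at `annual_rate` to go from `start` people to `target` people."""
    return math.log(target / start) / math.log(1 + annual_rate)

print(round(years_to_reach(1_000, 1e9, 0.02)))  # ~698 years: 1,000 survivors, 2% growth
print(round(years_to_reach(1_000, 1e9, 0.01)))  # ~1,388 years: 1,000 survivors, 1% growth
print(round(years_to_reach(2, 1e9, 0.01)))      # ~2,013 years: 2 survivors, 1% growth
```

Those results line up with the rough estimates in the text: about 700, 1,400 and 2,000 years respectively.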

The surviving humans would have the same key traits that make an apocalypse seem inevitable now: the basic intelligence to come up with devastating weapons, a relentless drive towards control over their environment, and other humans just as smart who compete in the here and now for dominance by willingly running the risk of massive conflict.

Is the species destined for a cycle of near-destruction, followed by a recovery of our numbers, followed by a new arms race that eventually leads to near-destruction? Is this the pattern we’re looking at for the next million years or so, with only small differences from one cycle to the next (such as in the other species that make it to the next cycle alongside us, or the climate that the survivors live in)? Is there an alternative scenario in which a cycle ends not with near-destruction but with something else wherein the odds of near-destruction are truly zero?

It is hard to envisage an escape from the cycle because the three ingredients (intelligence, dominance drive, short-run incentives) seem innate to biological evolution. Any biological species as smart as us would have the same problem because both the dominance drive and the short-run incentives are a consequence of evolutionary pressures and our biology: a species without the relentless dominance drive is an evolutionary dead-end that would quickly be killed off by the more aggressive specimens. So the dominance drive is truly a given.

The short-term incentives are given too, because the nature of our neural intelligence is adaptive, meaning that we constantly change our mental make-up in order to do well in the evolving environment. This precludes the very possibility that we could live for the long term, even if our bodies could keep going for millennia: our very nature is to constantly change, and hence our current selves slowly die, replaced by different selves that care about something different. Human minds that don’t change and that care for the millennia ahead rather than the next few generations are not human minds at all.

Is there hope in non-human intelligence then? Or the possibility that some group wins and establishes a world empire that subdues everyone into a peaceful society? Let’s take the two in turn, starting with the idea of the world empire, a staple of many philosophies.

Humanity has had enough experience with empires to know they are not a solution to the problem of conflict between technologically advanced humans. The dominance drive simply gets played out within the empire’s higher echelons. This was true for the Chinese empires, the Egyptian empires, the Ottoman empire, the Roman empire, the Japanese empire, the British empire, the Soviet Union, the EU, etc.: now and then their elites split, so that you got massive civil wars or break-ups of the empire into smaller regions that then started to fight each other. In each empire, there were enormous conflicts within the elites. Palace guards, eunuchs, the various sons of the emperor, competing democratic parties, the noble families, competing ministries, Praetorian guards, religious leaders, etc.: they were all constantly conniving and scheming against each other, using all means at their disposal, including assassination and the latest technology.

So we should not believe that human conflict will be over if the world ‘comes together as one’. It is a fantasy that hides megalomania. An empire is even more dangerous than competing nation states, because with an empire the whole world would be involved in any internal conflict, whereas with competing nation states there might be a few countries left out of the firing line. There is thus no real hope in a human-led empire to avoid an apocalypse, though there is no shortage of humans shouting loudly that all will be well if they are in charge of everything. Such humans are normal, and that’s the problem.

How about the idea that we humans stop being in charge altogether and that machines truly take over, such that we follow their orders? How could that help prevent the seemingly inevitable cycle of near-destruction when any machine that would lord it over us will be designed on the instructions of humans? Is it conceivable that groups of humans will band together to consciously design artificial intelligence systems to truly enslave us to the will of those AI systems, for our own benefit? Wouldn’t we just rise up against such slavery, leading to either a new cycle (if we win or if we lose to a benevolent AI) or our complete destruction (if we lose to a not-so-benevolent AI)?

I can see a scenario, but it is an odd one. It is the scenario in which AI-style machines are the gods we consciously create and then willingly worship. I can imagine whole populations actively building and maintaining a Jesus Christ, a Zeus, a Buddha, and a Horus. Indeed, if one powerful group of humans does this for their god, I can see lots of other populations building their own Gods to keep up.

How would this go, given the immense current limitations of AI and machines? I am not a programmer who is completely up to date on the current possibilities and bottlenecks, but I do know that we are still decades away from human-type intelligence and that AI systems are already so complex that no single human knows the whole of this vast area either.

Individual humans and even teams of humans working on new AI see but a small part of the technology. So if we do create our own gods, it will be without any team of humans truly understanding what they are building, which means the results will be unexpected: it will be trial and error on a massive scale. Building our gods might take centuries and involve the active participation of simpler AI systems.

We have done this type of thing before. Just think of the cathedrals of Europe, like Notre-Dame or the Sagrada Família. They took centuries to build, involving many generations. There were many changes along the way to the original designs, and all the old cathedrals have had major repairs and additions. What fuelled their construction and maintenance was the belief of the local population that they were involved in a holy enterprise. The populations that built them were competing with other populations to have more magnificent cathedrals.

I can see how populations could start to regard AI and robotics as the new cathedrals. Once the idea takes hold that we can build our own gods, I can see enormous enthusiasm emerging for such projects. AI will stop being something that is feared and start to be something we crave.

Humanity as a whole would get different gods built by different populations. The natures of these gods would vary just as religions currently vary: the gods would reflect the religiosity of the populations that built them. Some would have a thirst for knowledge, others for conquest, yet others for beauty, perhaps even a few for sex or humour.

Would these gods help humanity escape the cycle though or would the devastation just be inflicted by competing gods rather than competing humans?

Though it is entirely possible that the gods we create in this scenario fight each other, or that at least a few of them start massive wars, there is hope in these gods precisely because some will differ from humans. When humans envisage gods, they usually do not truly think of their own actual natures, but envisage something they think is better than themselves. They will thus build gods in the image of what they hope for and aspire to.

There is thus the possibility that humanity would end up with a ruling religious pantheon whose gods have made a deal with each other to keep the peace. Different groups of humans would worship different gods and keep adding to them. If we are lucky, gods might emerge that do not have short-run incentives or innate competing dominance drives. Then we might survive without cycles of devastation.

Who knows, maybe we will get lucky and do this first time round?

So there you have it: my odd hope that the seeming inevitability of human self-destruction in the coming millennia might be overcome by our religiosity. We just might at some point create the gods that will save us from ourselves by enslaving us. What a thing to hope for!

26 Comments
John Goss
5 years ago

There is a story by Isaac Asimov in which humans build a supercomputer designed to answer all the difficult questions that humans have not been able to answer. They turn it on and decide to test it by asking the most difficult question of all: ‘Is there a God?’
The answer comes back ‘Now there is’.

And the idea you put, Paul, that computers will save humanity from its self-destructive impulses has been explored many times in science fiction. But the solutions the computers provide are not always palatable.

paul frijters
5 years ago
Reply to  John Goss

Hi John,

of course this is in the realm of science fiction because many humans like to think about possible futures. Some science fiction will get it more right than others and I hope you’ll allow me my reasoned guess.

I have seen movies with a single computer that belongs to some government (I, Robot) or industrialist (Blade Runner), but the idea of the post is subtly and importantly different: I think the interesting scenario is that we’re actively going to design gods and run long-term programs to build them. On purpose, from the outset, and not to serve us or answer questions, but for us to worship. Many different groups will design different ones, leading to a whole pantheon of them. They are unlikely to be computers as we know them, because computers have very limited senses and sensations. I think we’re likely to build contraptions with eyes, ears, pain, desires, memories, dreams, and whatever else our collective minds can think of. They won’t be gods in our image.

If you have seen what I describe in a movie already, then all I can say is I missed that one but I agree with it for my own reasons!

Alan
5 years ago
Reply to  John Goss

The short story is Answer by Fredric Brown:

Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.
He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe — ninety-six billion planets — into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.
Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment’s silence he said, “Now, Dwar Ev.”
Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.
Dwar Ev stepped back and drew a deep breath. “The honor of asking the first question is yours, Dwar Reyn.”
“Thank you,” said Dwar Reyn. “It shall be a question which no single cybernetics machine has been able to answer.”
He turned to face the machine. “Is there a God?”
The mighty voice answered without hesitation, without the clicking of a single relay.
“Yes, now there is a God.”
Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.
A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

(Fredric Brown, “Answer”)

paul frijters
5 years ago
Reply to  Alan

Nice, but I suggest a different ending is more appropriate to our nature and my argument:

“Yes, now there is a God.”

There was a collective sigh heard in all the galaxies as Dwar Ev and Dwar Reyn sank to their knees, their faces filled with beatific ecstasy.

“It worked”.

John R Walker
5 years ago

Hi Paul

Think a better term for what you are postulating could be the all-time Philosopher King.

paul frijters
5 years ago
Reply to  John R Walker

Not really, John. I am talking about your god. As one of many. Scary, I know.

John R Walker
5 years ago
Reply to  paul frijters

Paul
Your description is too dualistic.
Cloud of Unknowing, MU to you :-)

Bruce Bradbury
5 years ago

How long before the Chinese Communist Party becomes an AI? What sort of God will that be?

John R Walker
5 years ago
Reply to  Bruce Bradbury

Fascinating question. I’d guess that it would be very fond of building; cement and reo (possibly also an obsessive, micro-managing style of god).

paul frijters
5 years ago
Reply to  Bruce Bradbury

yes, they are well on their way with their social points systems and increasingly automated control of online expression.
‘Social harmony’ is their key mantra. It’s the type of God that could surprise us if it decided the current leaders are in the way of social harmony.

paul frijters
5 years ago
Reply to  John R Walker

mainly that the US-China rivalry is leading to strife within the elites of Australia. The security apparatus has combined with the Murdoch press to demonize China. Yet the business community, the Chinese diaspora, and academia have a lot of ties with China and are pushing back. The author of the piece you link to is clearly part of the US security apparatus, which is also what you see from the affiliation. The content of the piece is far less interesting than who has published it.

John Goss
5 years ago

Thank you Alan for correcting my faulty memory as to the author of the story and the exact story line. I love Asimov, and he has written so much SF with philosophical implications that I over-ascribe to him.

John R Walker
5 years ago

Hi Paul
There have always been plenty of gods that issue instructions like this:

The Slayer Time, Ancient of Days, come hither to consume;
Excepting thee, of all these hosts of hostile chiefs arrayed,
There shines not one shall leave alive the battlefield! Dismayed
No longer be! Arise! obtain renown! destroy thy foes!
Fight for the kingdom waiting thee when thou hast vanquished those.
By Me they fall—not thee! the stroke of death is dealt them now,
Even as they stand thus gallantly; My instrument art thou!
Strike, strong-armed Prince! at Drona! at Bhishma strike! deal death
To Karna, Jyadratha; stay all this warlike breath!
’Tis I who bid them perish! Thou wilt but slay the slain.
Fight! they must fall, and thou must live, victor upon this plain!”

I’d pray that if AI does come to be really powerful, it is more like a philosopher king than a god.

paul frijters
5 years ago
Reply to  John R Walker

I don’t think populations care enough for philosopher kings to build them. They want Gods, even if they say they pray for something else, for who are they then praying to? Certainly not a philosopher king.

John R Walker
5 years ago
Reply to  paul frijters

Hi Paul
Think this is probably a semantic difference: regardless of what it’s called, I’d hope that such an AI behaved like an ideal philosopher king rather than a god. A god without even a hint of the sublime (beyond all mere human comprehension, of beauty and terror) is not much of a god.

Curious: if such a thing did suddenly emerge, would you yourself be bowing down in supplication?

BTW ever read much by Stanislaw Lem? In particular Solaris (the book, not the film)

paul frijters
5 years ago
Reply to  John R Walker

yes, I can see myself praying to (some of) those gods. Religiosity and the belief in an unseen supernatural need not be the same.
How about you?

John R Walker
5 years ago
Reply to  paul frijters

Possibly the opposite: if it turned out that god really was a knowable thing, an object (some old bloke with a beard reclining on a cloud), I’d have to become an atheist.

BTW MU was not a trivial response to the dualism inherent in your illusions
Regards and love

paul frijters
5 years ago
Reply to  John R Walker

I wouldn’t say ‘a knowable thing’, because, like a country or a cathedral, each god would have so many influences making it up that it would appear magical and mystifying to everyone.

I haven’t read The Cloud of Unknowing, but I am sympathetic towards mysticism. As long as people don’t get carried away by it for too long :-)

John R Walker
5 years ago
Reply to  paul frijters

Fair enough :-)

John Goss
5 years ago
Reply to  John R Walker

Such arrogance, John! If god wasn’t up to your expectations you’d disbelieve in her/him/it. I know you’re partly taking the mickey out of yourself, but, seriously, I thought god was supposed to be beyond ourselves, not constrained by our intellect or beliefs.

John R Walker
5 years ago
Reply to  John Goss

:-)
Hi John
It’s a bit more like, if god was up to my expectations, then god would not equal god.
BTW
if you’re interested
this might be helpful.

John R Walker
5 years ago
Reply to  John Goss

PS
If you’re interested, look up the philosopher Nishida Kitarō and the Kyoto school (it is fairly hard work but it does stretch the mind’s envelope).

Anthony
5 years ago

I accept the basic premise that, given infinite time, someone launches a nuclear apocalypse. However, that particular threat is not necessarily different from other massive threats we face (antibiotic resistance, CAGW, etc.). In theory, technology can ameliorate or prevent the worst-case scenario: say, an anti-ICBM satellite network or ground-based lasers.

So, as long as you can get ahead of the apocalypse before it arrives, you are OK. If you can predict the apocalypse and devise a solution, you are OK. The issue is the thing you just don’t see coming. For example: terrorists miniaturising very large nuclear devices.

Chris Lloyd
5 years ago

The answer is diversification. We get a decent colony on the moon (or maybe Mars so we can’t shoot at each other) and the cycle never happens. #elon_musk

John R Walker
5 years ago

Hi Paul
It just struck me that you’ve offered up a potentially fascinating, and quite new, future variation on the Turing test: can you tell, is it a god, or is it AI?
Hope all is well for you and your family. Peace
John