Last week, we looked at the hypothesis that we are living in an ET’s simulation. (In their world, we don’t really exist.)
This week, let’s look at the Berserker hypothesis, the fifth in Williams’s series (counting down). The hypothesis suggests that we haven’t heard from alien civilizations because they’ve been wiped out by their own killer robots. Well, what if robotics ran amok…
The word “Berserker” comes from early European warfare. Warriors who behaved almost mechanically while slaughtering enemy soldiers sometimes underwent an initiation beforehand in which they donned a bear’s skin (= the bear’s shirt, or ber-serk) to channel the ferocity of that animal. They terrified friend and foe alike with their unreflective savagery.
Of course, machine warriors would be, for many reasons, far better suited to such pitiless slaughter. The concept of merciless machine warriors gained traction from the 17-book Berserker series (1967 through 2005) by American science fiction and fantasy author Fred Saberhagen (1930–2007). Some of the later books were co-written with other authors.
Here’s Amazon’s summary of the series’ theme:
Long ago, in a distant part of the galaxy, two alien races met—and fought a war of mutual extinction. The sole legacy of that war was the weapon that ended it: the death machines, the BERSERKERS. Guided by self-aware computers more intelligent than any human, these world-sized battle craft carved a swath of death through the galaxy—until they arrived at the outskirts of the fledgling Empire of Man.
These are the stories of the frail creatures who must meet this monstrous and implacable enemy—and who, by fighting it to a standstill, become the saviors of all living things.
So that brings us back to the Berserker Hypothesis as to why we don’t see space aliens, with a twist: The machines that killed them all may be coming for us. That’s why Oxford philosopher Nick Bostrom expressed the hope in an influential 2008 essay that we don’t find any aliens:
To constitute an effective Great Filter, we hypothesize a terminal global cataclysm: an existential catastrophe. An existential risk is one where an adverse outcome would annihilate Earth-originating intelligent life or permanently and drastically curtail its potential for future development. We can identify a number of potential existential risks: nuclear war fought with stockpiles much greater than those that exist today (maybe resulting from future arms races); a genetically engineered superbug; environmental disaster; asteroid impact; wars or terrorist acts committed with powerful future weapons, perhaps based on advanced forms of nanotechnology; superintelligent general artificial intelligence with destructive goals; high energy physics experiments; a permanent global Brave New World-like totalitarian regime protected from revolution by new surveillance and mind control technologies…
Perhaps the most likely type of existential risks that could constitute a Great Filter are those that arise from technological discovery. It is not farfetched to suppose that there might be some possible technology which is such that (a) virtually all sufficiently advanced civilizations eventually discover it and (b) its discovery leads almost universally to existential disaster.

– Nick Bostrom, “Where Are They?”, MIT Technology Review (May/June 2008), pp. 72–77
Cambridge physicist Adrian Kent has gone further. In 2011, he suggested that risks like swift mass destruction are why extraterrestrial civilizations are hiding from each other, never mind from us. And that is why we don’t see anybody out there:
It is often suggested that extraterrestrial life sufficiently advanced to be capable of interstellar travel or communication must be rare, since otherwise we would have seen evidence of it by now. This in turn is sometimes taken as indirect evidence for the improbability of life evolving at all in our universe. A couple of other possibilities seem worth considering. One is that life capable of evidencing itself on interstellar scales has evolved in many places but that evolutionary selection, acting on a cosmic scale, tends to extinguish species which conspicuously advertise themselves and their habitats. The other is that — whatever the true situation — intelligent species might reasonably worry about the possible dangers of self-advertisement and hence incline towards discretion. These possibilities are discussed here, and some counter-arguments and complicating factors are also considered.

– Adrian Kent, “Too Damn Quiet?”, arXiv.org
Indeed, as Bostrom notes, extinction by technology might be more likely than extinction by natural disaster. Highly advanced civilizations may be able to control nature more easily than they can control their technology.
The lethal machines would, of course, need to be self-replicating. Isaac Arthur looks at that concept (the von Neumann machine) here:
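Why self-replication is the key ingredient can be seen with some back-of-the-envelope arithmetic. The figures below are rough, commonly cited ballpark assumptions (not claims from the article or the video): a probe that builds even two copies of itself at each star it visits multiplies exponentially, so the fleet matches the Milky Way’s roughly 100 billion stars in only a few dozen generations.

```python
import math

# Sketch of exponential growth for self-replicating (von Neumann) probes.
# All numbers are illustrative assumptions, not claims from the article.

STARS_IN_GALAXY = 1e11   # order-of-magnitude star count for the Milky Way
COPIES_PER_STOP = 2      # each probe builds 2 successors at every stop
HOP_TIME_YEARS = 10_000  # assumed travel + construction time per generation

# After g generations there are COPIES_PER_STOP ** g probes, so the fleet
# matches the star count once g >= log(STARS_IN_GALAXY, COPIES_PER_STOP).
generations = math.ceil(math.log(STARS_IN_GALAXY, COPIES_PER_STOP))

print(f"Generations to match the star count: {generations}")
print(f"Elapsed time at {HOP_TIME_YEARS:,} years per hop: "
      f"{generations * HOP_TIME_YEARS:,} years")
```

Under these assumptions the whole galaxy is reachable in well under a million years, a blink on cosmic timescales, which is what makes the Berserker scenario unsettling: one bad launch, anywhere, ever, would have had ample time to reach us.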
The Berserker concept became a board game (1982). No surprise that most of us would be much happier to see the hypothesis demonstrated in art and games than in life.
You may also enjoy:
Seven reasons (so far) why the aliens never show up. Some experts think they became AI, some that they were killed by their AI, and others say they never existed. Who’s most likely right? Science fiction writer Matt Williams delves into seven hypotheses into which scientists and science fiction writers have put a lot of thought.
Are the Aliens We Never Find Obeying Star Trek’s Prime Directive? The Directive is, don’t interfere in the evolution of alien societies, even if you have good intentions. Assuming the aliens exist, perhaps it’s just as well, on the whole, if they do want to leave us alone. They could want to “fix” us instead…
How can we be sure we are not just an ET’s simulation? A number of books and films are based on the idea. Should we believe it? We make a faith-based decision that logic and evidence together are reasonable guides to what is true. Logical possibility alone does not make an idea true.