The Great Filter: Why humans could be in their final years as AI takes control


The Fermi Paradox highlights the contradiction between the high likelihood that advanced civilizations exist and the complete lack of solid evidence that they do.

Numerous explanations have been proposed to tackle this paradox, and one of them is the concept of the ‘Great Filter.’

In short, the Great Filter refers to a theoretical event or scenario that hinders the progress of intelligent life in becoming interplanetary and interstellar, ultimately leading to its extinction.

Our planet and predecessors have already faced various challenges in the form of mass extinction events.

A quarter of a billion years ago, for example, the Permian-Triassic extinction, also known as the Great Dying, took place, wiping out the vast majority of marine and land species.


However, these extinction events were natural occurrences arising from the evolution of our planet and solar system. Volcanic eruptions, meteor strikes and asteroid impacts are not man-made.

Now, though, things are changing — and fast.

It appears that humanity may be facing a significant challenge of our own making, with nuclear weapons and biomedical advancements casting ominous shadows.

And, of course, one cannot discuss the casting of dark shadows without discussing artificial intelligence (AI).

According to a recent study published in Acta Astronautica, the idea that AI could evolve into Artificial Super Intelligence (ASI) and serve as another Great Filter should be taken very seriously.

The paper, authored by Michael Garrett, a researcher at the Department of Physics and Astronomy at the University of Manchester (UK), explores the possibility of AI making that leap to ASI.

Garrett highlights the importance of regulating AI effectively to prevent it from becoming a threat to our civilization and other technological civilizations.

According to Garrett, the Great Filter may prevent technological civilizations, such as ours, from colonizing other planets, leaving them (and us) vulnerable to extinction or stagnation without a backup plan.

Garrett underscores the urgency of addressing this issue, arguing that the advance of Artificial Super Intelligence could serve as a critical juncture for civilizations, possibly resulting in their demise within a span of 200 years if not properly regulated.

Garrett is not the first to talk about the threat of a Great Filter, and he certainly won’t be the last.

In 2022, a group of scientists associated with NASA outlined the many reasons why the filter “has the potential to eradicate life as we know it.” They warned that “our rate of progress” appears to correlate “directly to the severity of our fall.”

In other words, humans should prepare for an Icarus-like descent.

Intriguingly, the researchers noted that we may have already passed the ‘Great Filter.’ On the other hand, they said, there is a high chance that it still lies ahead in our future.

Essentially, although it is possible that many civilizations do not progress beyond single-cell life, and we have already overcome this obstacle, it’s also possible that we might be on the brink of self-destruction.

And the self-destructive forces could hit long before we have the capacity to venture beyond Earth.

The destructive forces could come in a variety of equally unappetizing flavors, including warfare, resource depletion, engineered pandemics, and runaway AI.

In their publication, the researchers propose that achieving the status of an interstellar species hinges on our ability to acknowledge our current situation and the existential risks we face.

Regarding artificial intelligence, like Garrett, the researchers emphasized the importance of taking proactive measures to prevent it from becoming our ultimate filter and leading to extinction.

They cautioned that waiting until AI is fully developed may be too late to assess its intentions towards humanity based on empirical evidence.

Stephen Hawking famously said that artificial intelligence “will either be the best thing that’s ever happened to us, or it will be the worst thing.”

“If we’re not careful,” he added, “it very well may be the last thing.”
