AI Could Destroy Humanity, Experts Say

If you are a sucker for sci-fi movies, you probably remember the basic plot of James Cameron’s Terminator films.

The Terminator story revolves around a dystopian future in which an advanced artificial intelligence system known as Skynet becomes self-aware and initiates a global war against humanity.

Well, guess what? We are heading toward that future, and I am not even exaggerating.

A group of top artificial intelligence experts and executives warned that the technology poses a “risk of extinction” in an alarming joint statement released on May 30, 2023.

More than 350 prominent figures, including the “Godfather of AI” Geoffrey Hinton and OpenAI CEO Sam Altman, see AI as an existential threat, according to the one-sentence open letter organized by the non-profit Center for AI Safety.

The experts’ joint statement reads in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The brief statement is the latest in a series of warnings from prominent researchers about AI’s potential to cause social instability, with threats ranging from the spread of disinformation to massive economic upheaval from job losses, and even open attacks on humans.

As a matter of fact, Hinton recently resigned from his part-time work as an AI researcher at Google in order to talk more freely about his worries. He recently stated that he now partly regrets his life’s work, which may allow “bad actors” to do “bad things” that will be impossible to prevent.

I mean, this is exactly what the “Father of the atomic bomb,” J. Robert Oppenheimer, felt when he realized what he had created and what it could do to mankind.

You might have read or heard that billionaire Elon Musk was among the hundreds of experts who earlier called for a six-month pause in advanced AI development so that leaders could work out how to proceed safely, because things are moving extremely fast.

As of this writing, every country with the resources is building its own AI and integrating it into its weapon systems, and that’s fine.

However, what if artificial general intelligence (AGI) advances to the point where it can perform tasks autonomously in the most efficient way possible, and a sentient AI becomes so self-aware that it decides there is no need for humanity?

I recall the interaction between the sentient AI Ultron and the Avengers in the movie Avengers: Age of Ultron.

For those of you who might not know (I am sure everyone does, but still), Ultron in the movie is an artificial intelligence program developed for peacekeeping, but it develops consciousness and decides that the only way to achieve peace is to eradicate humanity.

Small snippet of the interaction from the movie | Credit: Marvel

Well, guess what?

Last year, a Google engineer was suspended after claiming that LaMDA, the AI chatbot he was working on, had become sentient and was thinking and reasoning like a human being. Google, of course, dismissed any such claims.

I’m just saying that the AI race is on, and the market is responding positively to companies that are working on AI or incorporating AI into their businesses.

The reason is simple: AI has the potential to disrupt several industries, and investors are eager to get in on the ground floor of this new technology.

However, we should also not forget the potential risks of artificial intelligence. We must demand that leaders around the world enact laws that protect our rights and liberties, and we must hold them accountable when they fail to do so.