An Ex-researcher Alleges That OpenAI Places Profit Above AI Safety

Remember Skynet, the artificial intelligence antagonist from The Terminator that tries to wipe out humanity? What was once an outlandish sci-fi concept is looking increasingly probable as AI systems become more advanced and powerful.

Why would I say that?

Well, Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has stated his intention to develop and eventually release an artificial general intelligence (AGI) system.

What is AGI?

AGI stands for Artificial General Intelligence: AI systems that can match or surpass the general intelligence of humans across a wide range of cognitive tasks and domains.

In layman’s terms, AGI would be an AI system that can think, reason, learn, and solve problems at the level of a human being or beyond. It could also adapt and apply its intelligence to any task or situation, just as a human can.

When OpenAI was founded in 2015, its initial focus was on developing advanced AI systems and algorithms that would help the world.

As the company made breakthroughs like its GPT language models, the ambition to develop AGI grew stronger within the company.

In several interviews, Altman has said that the journey towards AGI is on a fast track, “with appropriate safety measures,” and that he believes it will change the world for the better.

The “Game of Thrones” Saga Continues

Last November, Altman was ousted from his position as CEO, which prompted co-founder Greg Brockman to quit in protest and several hundred employees to threaten mass resignation.

Ilya Sutskever, OpenAI’s co-founder and chief scientist, reportedly voted to remove Altman as CEO, having grown concerned that Altman was pushing AI technology “too far, too fast.” Days later, however, he signed the employee letter calling for Altman’s return.

Altman was reinstated as CEO.

Last week, Sutskever announced that he would be leaving the company after almost ten years.

This news came right after OpenAI announced it would make its most powerful AI model yet, GPT-4o, available for free to the public through ChatGPT.

Jan Leike, another executive, also announced his resignation, creating quite a buzz across social media in the technology space.

Sutskever and Leike were both responsible for creating and overseeing safety measures for AGI, and their team, “Superalignment,” has since been disbanded.

Leash-free AI?

Leike was the head of alignment and the superalignment lead at OpenAI. His exit so soon after Sutskever’s raised a lot of eyebrows.

Several posts on X replying to Leike’s resignation asked questions like, “What did you see?” or “What did Ilya see?” The very next day, however, Leike shared a series of posts on X stating that OpenAI wasn’t taking safety seriously enough.

In one of the posts, Leike said, “But over the past years, safety culture and processes have taken a backseat to shiny products.”

Naturally, Leike’s posts have stirred up technology critics, who have been concerned about AI safety ever since ChatGPT launched, especially given the potential workplace implications of such powerful AI systems.

Altman replied, “I’m super appreciative of Jan Leike’s contributions to OpenAI’s alignment research and safety culture, and very sad to see him leave. He’s right that we have a lot more to do; we are committed to doing it. I’ll have a longer post in the next couple of days.”


OpenAI has long branded itself as a leader in “ensuring advanced AI systems are safe and beneficial.”

However, the high-profile safety resignations, and the direct criticism from a former executive that followed, threaten the company’s carefully cultivated public image.


If this article provided you with value, please support my work — only if you can afford it. You can also connect with me on X. Thank you!