Is AI as Dangerous as Nukes? OpenAI Thinks So


OpenAI, the company behind ChatGPT, is warning that AI could pose such a threat to society that it must be subject to the same regulations as nuclear energy. 

The research group has called for immediate regulation of “superintelligent” AIs, warning that a regulatory body equivalent to the International Atomic Energy Agency is needed to protect humanity from the risk of creating something powerful enough to destroy it. 

In a short blog post published on the company’s website, CEO Sam Altman and co-founders Greg Brockman and Ilya Sutskever called for an international regulatory body that can establish how to “inspect systems [and] place restrictions on degrees of deployment and levels of security” to reduce the “existential risk” AI poses.

“Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property,” they explained. 

“In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we have to manage risk to get there.”

In the shorter term, the big-tech trio said that there needs to be “some degree of coordination” among AI developers and businesses. 

This coordination will need to come through government intervention, they said, or through a collective agreement across the tech industry to limit growth and development.  

“We’re not just sitting in Silicon Valley thinking we can write these rules for everyone,” Brockman said at the AI Forward event in San Francisco. “We’re starting to think about democratic decision-making.”

Big tech’s atom bomb 

While researchers have been warning of the risks of AI for decades, it is only as development has picked up pace that these risks have moved from the realm of possibility to reality. 

Last month, over 1,800 public figures, including Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, urged tech firms to pause the development of AI systems, warning of the threat that unchecked “AI experiments” pose to society. 

The authors of the letter note that while the possibilities of the technology are significant, the world could face a harsh reality if unregulated development continues. 

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” reads the letter.

The development of AI has accelerated drastically since the launch of OpenAI’s ChatGPT last November, which has gripped Silicon Valley and locked big tech in an AI arms race as companies fight to control the promising AI market. 

The chatbot amassed more than 100 million active users in less than six months and is financially backed by Microsoft, which has so far invested $11 billion in the company. 

But the technology that powers ChatGPT has raised concerns among experts, who warn that generative AI could have detrimental effects on society. 

Dr Geoffrey Hinton, the man widely regarded as the godfather of AI, recently left his position at Google’s AI division due to concerns about where the technology was heading. 

Dr Hinton said that AI was “quite scary,” warning that the dangers will come when “bad actors” gain access to the technology and exploit it for “bad things.”

“I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have.”

“So it’s as if you had 10,000 people and whenever one person learned something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

Current systems like ChatGPT are protected from being exploited for malicious activity by built-in safeguards that restrict what they will generate. 

But bad actors have already found workarounds to bypass these restrictions, allowing them to weaponise the technology. 

An investigation by BlackBerry recently revealed that hackers may already be using the AI tool to launch a range of attacks, including phishing campaigns and nation-state cyberattacks. 

Researchers discovered that hackers were bypassing OpenAI’s restrictions with Telegram bots that call OpenAI’s API directly, using it to generate malicious Python scripts for malware attacks and to craft convincing phishing emails in seconds. 
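The bypass is straightforward in outline: a Telegram bot simply relays messages to OpenAI’s API, which reportedly enforces fewer content filters than the ChatGPT web interface. What follows is a minimal, illustrative sketch of that relay pattern, assuming the python-telegram-bot (v20+) and pre-1.0 openai Python packages; the token values are placeholders, and no malicious prompts are shown.

# Illustrative sketch only; assumed packages: python-telegram-bot v20+
# and the pre-1.0 openai SDK. Token values are placeholders.
import openai
from telegram.ext import ApplicationBuilder, MessageHandler, filters

openai.api_key = "OPENAI_API_KEY"  # placeholder credential

async def relay(update, context):
    # Forward the Telegram message verbatim to the model; this API hop is
    # what reportedly lets operators sidestep the web interface's tighter
    # guardrails.
    response = await openai.ChatCompletion.acreate(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": update.message.text}],
    )
    # Send the model's reply back into the Telegram chat.
    await update.message.reply_text(response.choices[0].message.content)

app = ApplicationBuilder().token("TELEGRAM_BOT_TOKEN").build()  # placeholder
app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, relay))
app.run_polling()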

Worth the risk? 

The US-based Center for AI Safety (CAIS), which works to “reduce societal-scale risks from artificial intelligence”, describes eight categories of “catastrophic” and “existential” risks that AI could pose. 

They worry that the unregulated development of AI could lead to humanity “losing the ability to self-govern and becoming completely dependent on machines”, and that a small group of people controlling powerful systems could “make AI a centralising force”, leading to “value lock-in”: an eternal hierarchy of ruled and rulers.

To prevent these risks, OpenAI’s founding trio say that AI must become more open, so that people around the world can “democratically decide on the bounds and defaults for AI systems.” 

This democratisation, they believe, will actually make it safer to continue the development of AI systems rather than pause it. 

“Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work,” they argue. “We have to get it right.”

“We believe it’s going to lead to a much better world than what we can imagine today (we are already seeing early examples of this in areas like education, creative work, and personal productivity),” they write. 

“Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on.”

 
