US President Joe Biden has warned that US tech firms must do more to protect US citizens from the potential dangers AI software could pose to society.
He told science and technology advisers on Tuesday, 4 April, that while AI could help in addressing disease and climate change, it was critical that AI developers address potential risks to society, national security and the economy.
“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” the US president said at the start of a meeting of the President's Council of Advisors on Science and Technology (PCAST). When asked if AI was dangerous, he said: “It remains to be seen. It could be.”
Biden’s comments come less than a week after an open letter signed by over 1,800 public figures, including Elon Musk and Apple co-founder Steve Wozniak, urged tech firms to pause the development of new AI systems for six months to mitigate the risks “AI experiments” could pose to humanity.
Like Biden, the authors of the letter note that the positive possibilities of AI are significant, but warn that if scientists continue to train new models, the world could face a harsh reality in which unregulated AI dominates.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” reads the letter.
The development of AI has accelerated drastically since OpenAI launched ChatGPT last November; the chatbot wowed and appalled users with its human-like capabilities and gripped Silicon Valley in an arms race to control the market.
The AI-powered chatbot has amassed more than 100 million active users in less than six months and is financially backed by Microsoft, which has so far invested $11 billion in the technology.
With the technology developing so rapidly, the authors of the letter believe a pause could give policymakers time to develop new governance systems for AI and create authorities to track its development to ensure it is not being put to dangerous ends.
“This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the authors write.
“Powerful technologies need safeguards”
In discussing the dangers and risks of AI development, Biden said that social media had already illustrated the damage that new technologies can do without regulation.
"Absent safeguards, we see the impact on the mental health and self-images and feelings and hopelessness, especially among young people," Biden said.
He reiterated the need for Congress to pass bipartisan privacy legislation that restricts the amount of personal data that technology companies collect, bans advertising targeted at children, and prioritises health and safety in technological development.
It is not the first time the president has called for such legislative action. In his second State of the Union address, he highlighted the importance of data privacy and emphasised the need for stricter data protection laws to protect US citizens.
"We must finally hold social media companies accountable for experimenting they’re doing [on] children for profit,” Biden explained during his speech, gaining a standing ovation from both sides of the political spectrum.
“It’s time to pass bipartisan legislation to stop Big Tech from collecting personal data on our kids and teenagers online. Ban targeted advertising to children and impose stricter limits on the personal data that companies collect on all of us,” the president said.
With data privacy in the federal spotlight, the US government has recently targeted the video-sharing app TikTok for its algorithmic data collection tactics, which officials claim "jeopardise the privacy of Americans.”
Last week the government called for TikTok CEO Shou Zi Chew to appear before Congress and discuss the app’s handling of data, privacy and security practices with lawmakers.
Though Chew mounted a firm defence of TikTok, the hearing made new regulations, or even an outright ban on the app, look closer than ever before.
Like TikTok, OpenAI has also sparked the concern of government officials around the world due to ChatGPT’s method of scraping data from the web to generate its responses, as well as allowing children to access the app despite rules stating that they should not.
It is that concern that this week led to a data privacy watchdog in Italy blocking the chatbot while it conducts an official investigation into whether the technology breaks GDPR legislation.
The move appears to have set a precedent for the rest of Europe, and many other countries, including Germany, Spain and the UK, are also considering banning ChatGPT while they investigate its handling of data.
A Danger to Society
While AI seeks to improve the way we work, interact and collaborate, it is advancing at such a rate that even its own creators are concerned about the technology they are building.
“We've got to be careful here. I think people should be happy that we are a little bit scared of this,” Sam Altman, CEO and co-founder of OpenAI, said in an interview with ABC News.
“I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyber-attacks.”
The CEO assured that OpenAI is conducting enough research into the potential risks of AI to prevent it from inflicting any serious damage on society, but warned that other developers may not be as careful.
“There will be other people who don’t put some of the safety limits that we put on,” he added. “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”
Other big players in the tech world echo Altman’s caution. Elon Musk, who co-founded OpenAI with Altman before leaving the company in 2018, recently issued his own warnings as well as signing the letter calling for a halt to the technology’s development.
Musk has publicly expressed his fear of AI software on multiple occasions, calling for regulatory action on a technology he deems to be a danger to society.
“This is a case where you have very serious danger to the public, and there needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely,” Musk said in an interview with Reuters.
“The danger of AI is much greater than the danger of nuclear warheads, and nobody would suggest that we allow anyone to just build nuclear warheads if they want. Mark my words, AI is far more dangerous than nukes. So why do we not have regulatory oversight?”
Whether the US is anywhere close to introducing regulatory measures to manage the rapid development of AI technology remains to be seen. But Biden’s comments this week are certainly a sign that such measures may already be on the administration’s agenda.