
A US law firm has launched a class-action lawsuit against ChatGPT creator OpenAI, claiming it violated privacy laws by scraping data from the internet to train its tech. 

In a nearly 160-page suit filed in federal court in San Francisco, California, the firm alleges that OpenAI illegally obtained the data of millions of people without paying for it and without consent. 

"Despite established protocols for the purchase and use of personal information, [OpenAI] took a different approach: theft," the complaint reads. 

"They systematically scraped 300 billion words from the internet – 'books, articles, websites and posts – including personal information obtained without consent.' [They] did so in secret, and without registering as a data broker as required under applicable law."

OpenAI uses data scraped from the internet to train the generative AI models that power the likes of ChatGPT and DALL·E 2. 

The research group’s tech has become immensely popular since the launch of ChatGPT last November, which has gripped Silicon Valley and set off a global AI arms race.

Microsoft has since entered into an $11 billion partnership with the firm, integrating its AI tech into every corner of its empire – from Windows 11 to Azure. 

But the complaint claims that tools like ChatGPT allow OpenAI and Microsoft to “collect, store, track, share, and disclose” the personal data of millions of people, “putting millions at risk of having that information disclosed on prompt or otherwise to strangers around the world.”

This, it states, violates a string of US laws, from the Electronic Communications Privacy Act to the California Invasion of Privacy Act to the Computer Fraud and Abuse Act, among others. 

Chasing profits

The suit is seeking class-action certification and damages of $3 billion. That figure will likely change, however: any actual damages would be determined by the court, and only if the plaintiffs prevail.

It is just one of several legal filings against AI companies in recent months. In January, Getty Images sued Stability AI for allegedly stealing millions of copyright-protected images from the web to train its AI image generator, Stable Diffusion.

Several experts have warned that the method by which AI firms obtain their data may lead to the work of millions of content creators being stolen, raising questions about the future of creative industries and the ability to tell fact from fiction.

In many jurisdictions, using information without the owner's consent is permitted under certain circumstances, including news reporting, quoting, teaching, satire or research purposes.

While AI developers like OpenAI have used this argument to defend their non-consensual collection of data, it does not apply when they monetise their products. 

The complaint claims that by chasing profits, OpenAI has abandoned its original principle of advancing artificial intelligence “in the way that is most likely to benefit humanity as a whole.”

It seeks to represent a vast number of allegedly affected individuals and, as well as asking for $3 billion in damages, is asking the court to freeze commercial access to and further development of OpenAI’s products. 

AI doomsday

The suit is just one of many documents calling for an injunction on AI development. Echoing several experts, it warns that applications of AI could risk “civilizational collapse.” 

In April, more than 1,800 high-profile figures – including Elon Musk, Apple co-founder Steve Wozniak, and AI godfather Geoffrey Hinton – signed an open letter urging tech firms to pause the development of new AI systems for six months.

“This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the authors of the letter write.

Governments around the world are also taking note of the rapid advancement of AI. The European Parliament recently passed the world’s first “AI Act” to protect citizens against the tech’s “unacceptable level of risk.” 

Meanwhile, UK Prime Minister Rishi Sunak promised to make Britain the leader of AI regulation during his recent visit to the White House, calling for leaders to gather in London to “place safeguards” on AI technologies.  

But OpenAI has made it clear it understands the dangers of its technologies. In a blog post last month, CEO Sam Altman and co-founders Greg Brockman and Ilya Sutskever called for regulations to reduce the “existential risk” AI poses.

They warned that a regulatory body equivalent to the International Atomic Energy Agency is needed to protect humanity from the risk of creating something powerful enough to destroy it.