FTC Investigating ChatGPT for Not Protecting Users


The Federal Trade Commission (FTC) has opened an investigation into OpenAI over whether its AI chatbot ChatGPT breaks consumer protection laws by putting personal data at risk. 

According to a report by the Washington Post, the US competition watchdog sent the San Francisco company a 20-page demand for records this week regarding how it addresses risks related to its AI models.

The move represents the strongest regulatory threat the company has faced yet following the launch of its AI chatbot ChatGPT last November, which has taken the world by storm while raising concerns about its potential risks. 

The FTC is examining whether OpenAI engaged in unfair or deceptive practices that could have caused “reputational harm” to its consumers. 

This largely relates to the potential for ChatGPT to “generate statements about real individuals that are false, misleading or disparaging.”

An infamous example of this is when ChatGPT falsely accused a US law professor of committing harassment, citing a non-existent Washington Post article.

The FTC has also requested that OpenAI disclose the data it used to train the large language models (LLMs) that power AI products like ChatGPT and its image generator DALL-E 2 – something OpenAI has declined to do up to this point.

The FTC wants to know whether OpenAI obtained the data from the internet directly via scraping or by purchasing it from third parties.

It also asks for the names of the websites the data was taken from, as well as the steps OpenAI took to prevent personal information from being included in the training data.

CopyGPT

The move comes days after US comedian Sarah Silverman joined two other US authors in suing OpenAI for using their work to train ChatGPT without their consent.

Silverman and the authors accused OpenAI of scraping “blatantly illegal websites” and “shadow libraries”, including Bibliotik, that host their books in order to train ChatGPT.

Evidence of this, they claim, lies in the fact that ChatGPT can give accurate summaries of their copyrighted work in seconds, despite the fact they “did not consent to the use of their copyrighted books as training material”.

Silverman’s lawsuit is just the latest legal action targeting AI companies like OpenAI. At the end of June, a US law firm hit OpenAI with a $3 billion lawsuit for violating privacy laws by scraping data from the web to train ChatGPT.

In each case, the plaintiffs have demanded that OpenAI reveal where it scraped its data from, something the firm has repeatedly declined to do.

The FTC has also called on OpenAI to provide detailed descriptions of all complaints it has received about its products making “false, misleading, disparaging or harmful” statements about people.

OpenAI CEO Sam Altman said in a tweet Thursday evening that the company will “of course” work with the agency.

“It is very disappointing to see the FTC’s request start with a leak and does not help build trust,” Altman tweeted. “That said, it’s super important to us that [our] technology is safe and pro-consumer, and we are confident we follow the law.”

“We built GPT-4 on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it. we protect user privacy and design our systems to learn about the world, not private individuals.”

Regulating ChatGPT

This is not OpenAI’s first run-in with the FTC. The agency has issued multiple warnings that existing consumer protection laws apply to generative AI, even if lawmakers continue to outline new regulations for the emerging tech. 

The FTC’s demands for OpenAI are the first indication of how the agency intends to enforce those warnings on AI companies. 

If the FTC finds that a company has violated consumer protection laws, it can impose fines or place the business under a consent decree, which allows it to dictate how the company handles data.


The agency is no stranger to taking such action against large tech companies and has previously threatened to ban giants such as Meta from profiting from people’s data.

“The FTC welcomes innovation, but being innovative is not a license to be reckless,” said Samuel Levine, the director of the agency’s Bureau of Consumer Protection, in a speech at Harvard Law School in April.

“We are prepared to use all our tools, including enforcement, to challenge harmful practices in this area,” Levine added. 

The US still lags far behind in introducing any solid legislation that prevents AI companies from harvesting data from the web. The EU, however, is edging ever closer to implementing such a law.

Last month it moved one step closer to introducing the world’s first ‘AI Act’ after the European Parliament voted to approve the text of the legislation.
