
Italy’s data protection watchdog has set out a list of demands that OpenAI must meet by the end of April if ChatGPT is to return to Europe's pasta capital. 

The Guarantor for the Protection of Personal Data (GPDP) announced it had blocked access to ChatGPT last month pending an investigation into OpenAI’s suspected breach of the EU’s GDPR and failure to protect the data of children. 

Regulators alleged the Microsoft-backed research firm had "no legal basis that justifies the massive collection and storage of personal data in order to 'train' the algorithms underlying the operation."

Now the regulator has given OpenAI a list of measures to address its privacy concerns and has agreed to lift the restriction on ChatGPT if the firm implements its recommendations for safeguarding users' data and protecting children on its platform. 

"OpenAI will have to comply by April 30 with the measures set out by the Italian [Supervisory Authority] concerning transparency, the right of data subjects – including users and non-users - and the legal basis of the processing for algorithmic training relying on users' data," the GPDP revealed in a statement. 

"Only in that case will the Italian Supervisory Authority lift its order that placed a temporary limitation on the processing of Italian users' data, there being no longer the urgency underpinning the order, so that ChatGPT will be available once again from Italy."

The GPDP has ordered OpenAI to inform users how ChatGPT stores and processes their data and to ask for explicit consent before using people's data to train its AI models. OpenAI must also allow anyone to request that false personal information generated by ChatGPT be corrected or removed from the system altogether. 

Users must also be required to confirm they are 18 or older before using the software, through an age verification process that prevents children under 13 from accessing the chatbot. 

People aged 13 to 18 must also obtain consent from their parents or guardians to use ChatGPT, and all of these changes must be implemented before the April 30 deadline, or the ban will stay. 

Many of these policies are in line with OpenAI’s own plans, except for the deletion system requested by the GPDP, which could cause serious issues for the firm if the April 30 deadline is to be met. Regardless, OpenAI welcomed the GPDP's decision and said it would comply with the Italian watchdog’s demands. 

“We are happy that the Italian Garante is reconsidering their decision and we look forward to working with them to make ChatGPT available to our customers in Italy again soon,” OpenAI said in a statement on Wednesday. 

'A threat to global civilisation'

The GPDP said it would continue investigating potential breaches of data protection rules by OpenAI, reserving the right to impose any other measures needed at the end of its ongoing probe.

While Italy’s standoff with OpenAI appears to have been temporarily resolved, concerns about the firm’s data privacy shortcomings are mounting in other parts of the world.

Regulators in Canada have announced they are probing whether ChatGPT is unlawfully collecting, processing and storing personal data after receiving multiple complaints from officials. 


Meanwhile, Germany's federal commissioner for data protection, Ulrich Kelber, told the Handelsblatt newspaper that the country may consider joining Italy in blocking the chatbot over privacy concerns. 

“In principle, such action is also possible in Germany,” Kelber told the German publication, adding that the German government had already requested more information from Italy about its ban.

Other European countries, including France, Spain and Ireland, have urged the EU’s privacy watchdog to examine OpenAI’s data processing further. 

“We are following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU data protection authorities in relation to this matter,” a spokesperson for Ireland’s Data Protection Commissioner said in a statement. 

Governments aren’t the only ones concerned about generative AI. Some 1,800 public figures, including Elon Musk, have signed an open letter calling for a six-month pause on training language models more powerful than GPT-4, the technology powering ChatGPT.  

The letter notes that AI systems now have human-competitive intelligence. The authors believe this “could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.” 

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter asks dramatically. “Should we risk the loss of control of our civilisation?”

Regulatory action is imminent 

Experts say new regulations are needed to govern AI due to the technology's potential impact on national security, jobs and education. 

Dr Ilia Kolochenko, Founder of ImmuniWeb and a member of the Europol Data Protection Experts Network, told EM360 that more regulatory measures to control the impact of AI should be expected.

“Privacy issues are just a small fraction of regulatory troubles that generative AI, such as ChatGPT, may face in the near future,” Dr Kolochenko said.

“The regulatory trend is not a prerogative of European regulators; for example, in the United States, the FTC is poised to actively shape the future of AI. The Cyberspace Administration of China is also energetically working on new rules and restrictions for AI companies.”

Dr Kolochenko added that one of the biggest problems with AI models is that they are trained on data gathered without users' consent, which may breach multiple GDPR provisions and infringe copyright legislation around the world.

“While modern intellectual property (IP) law provides little to no protection to copyrighted content, most large-scale data-scraping practices violate the terms of service of digital resources, such as online libraries and websites, and may eventually lead to an avalanche of litigation for breach of contract and interrelated claims.”

“Some jurisdictions may even wish to criminally prosecute such practices under their unfair competition laws,” the data privacy expert added. 

Andy Patel, Researcher at WithSecure, agrees that more bans of ChatGPT should be expected, but believes that the restriction of generative AI at such an early stage of its development is excessive and unnecessary. 

“If Italy’s issue is with Italian citizens interacting with an invasive US technology company, bear in mind that most of the technologies we interact with come from the US,” Mr Patel said. 

“US-based social networks already control our discourse. As such, the fact that ChatGPT is hosted by a US company should not be a factor. Nor should concerns that AI might take over the world be.”

Dr Kolochenko joined Patel in stating that banning AI was not the best course of action for controlling the technology's impact. 

“Banning AI is a pretty bad idea: while law-abiding companies will submissively follow the ban, hostile nation-state and threat actors will readily continue their research and development, gaining an unfair advantage in the global AI race.”