The European Union’s (EU) landmark AI legislation has taken a crucial step toward becoming law after the European Parliament voted to approve the text of the legislation.
The regulation, known as the EU AI Act, has now moved to the final stage of the EU’s regulatory process, where officials will push to reach a compromise between the draft of the law approved by parliament, a different version prepared by the bloc’s executive branch, and the wishes of member states.
The AI Act would ban AI systems that present an “unacceptable level of risk,” such as predictive policing tools or real-time facial recognition software, and introduce new regulatory requirements on generative AI tools including OpenAI’s ChatGPT.
It would also require providers of generative AI tools to publish summaries of the data used to train their systems – a potential barrier for systems that generate humanlike speech by scraping text from the internet, often from copyrighted sources without the creator’s consent.
The stakes are high enough that OpenAI, the maker of ChatGPT, has said it may be forced to pull out of Europe, depending on what is included in the final text.
The European Parliament’s approval is a key step in the legal process, but the bill still awaits negotiations with the Council of the EU, which represents the governments of the bloc’s member states.
“The vote to bring in a law to govern the use of artificial intelligence is a welcome step,” said Kevin Bocek, VP of Ecosystem and Community at Venafi.
“This law aims to ensure the safety, transparency, non-discrimination, and traceability of AI so that it isn’t exploited or used for malicious means by adversaries.
“The great thing about the EU’s AI Act is that it proposes assigning AI models identities, akin to human passports, subjecting them to a Conformity Assessment for registration on the EU’s database.”
‘High Risk’
The version of the rules approved by EU lawmakers on Wednesday says that any AI system applied to “high-risk” use cases like employment, border control, and education must comply with a list of safety requirements, including risk assessments, transparency, and logging.
The Act does not automatically consider “general purpose” AI systems like ChatGPT to be high risk, but it does impose transparency requirements and risk assessments for “foundation models,” or powerful AI systems trained on large quantities of data.
Developers of these models, including tech companies like OpenAI, Google, and Microsoft, will be required to declare whether copyrighted material has been used to train their AIs.
“This progressive approach will enhance AI governance, safeguarding individuals and help to maintain control,” Bocek added.
“For businesses using and innovating with AI, they’ll need to start evaluating if their AI falls under the categories of risk proposed in the AI Act and comply with assessments and registration to uphold safety and public trust.”
A world leader in AI regulation
Unlike lawmakers in individual countries such as the United States, the European Union has spent years developing its artificial intelligence legislation.
The European Commission first released a proposal more than two years ago and has amended it in recent months to address new concerns introduced by recent advances in generative AI.
#AIAct just voted! ✅👏
The EU Parliament becomes the first House in the world voting on a comprehensive #AI regulation!
Today’s vote shows that we can reconcile trust and innovation 🇪🇺
— Thierry Breton (@ThierryBreton) June 14, 2023
This contrasts starkly with progress in the U.S. Congress, where lawmakers are still struggling to understand the risks of the AI boom gripping Silicon Valley.
Meanwhile, the EU bill builds on scaffolding already in place, adding to European laws on data privacy, competition in the tech sector, and the harms of social media.
Those laws have already affected companies’ operations in Europe. This week, Google planned to launch its chatbot Bard in the EU but had to delay the move after the Irish Data Protection Commission requested privacy assessments.
Italy also temporarily banned ChatGPT amid concerns that it violated Europe’s GDPR data privacy rules, in part over the chatbot’s handling of children’s data.