
The EU AI Act officially came into force on Thursday. The controversial act introduces new guardrails, consumer rights, and liability controls for the development of artificial intelligence.

Initially proposed by the European Commission in 2021, the EU AI Act is the first piece of legislation to establish a comprehensive legal framework around new AI technologies.

This article delves deep into the EU AI Act, exploring what the legislation is, how it works, and how organizations around the world can stay compliant.

What is the EU AI Act?

The EU AI Act is a piece of legislation that aims to ensure the safety of AI systems across the European Union. It is intended to provide legal certainty for investments and innovation in the AI space while minimizing risks to consumers.

The act follows a risk-based approach to controlling AI, imposing stricter rules on systems with a higher risk potential.

The act was created as a landmark bill to mitigate harm in areas where using AI poses a risk to fundamental rights, including healthcare, education, and border surveillance, and it bans outright any AI that could pose an “unacceptable risk” to the public.

EU AI Act rapporteur Brando Benifei speaking at the EU Parliament.

“The EU AI Act sets out transparency obligations for producers, vendors and deployers of limited and high-risk AI algorithms,” said Tudor Galos, founder of Tudor Galos Consulting and privacy consultant.

“They need to examine and remove biases, ensure data quality, provide explainability of AI systems, have human oversight and aim for accuracy, robustness and cybersecurity.”

What AI systems does the EU AI Act impact?

The EU AI Act categorizes AI systems based on their potential risk levels. This risk-based approach ensures that regulations are proportionate to the potential harm.

Systems categorized as ‘unacceptable’ will be completely banned. Systems qualify as ‘unacceptable’ if they enable “social scoring” or are otherwise considered a clear threat to people's fundamental rights.

Systems categorized as ‘high risk’ will be subject to rigorous testing and must meet strict requirements, including risk-mitigation systems, high-quality data sets, clear user information and human oversight, before they can be placed on the market. This category will include AI for sectors like healthcare, transportation and critical infrastructure.

The ‘limited risk’ category addresses the risks associated with a lack of transparency in AI usage. The AI Act will introduce specific transparency obligations for these systems to keep humans in the loop where necessary.

This includes making people aware, for example, that they are using an AI system such as a chatbot. Users are then able to make informed decisions on whether they continue to use the system when armed with the knowledge that it is based on artificial intelligence. 

Providers will also have the responsibility to ensure that AI-generated content is identifiable. AI-generated text with the purpose of informing the public must be labelled as AI-generated. This ruling also applies to audio and video ‘deep fakes’. 
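The Act does not prescribe how this labelling must be done. As a purely illustrative sketch, a provider-side disclosure step might look something like the following (the `PublishedContent` type and the label format are assumptions for this example, not anything the Act mandates):

```python
# Illustrative only: the EU AI Act requires AI-generated content to be
# identifiable, but it does not mandate this (or any) particular format.
from dataclasses import dataclass

@dataclass
class PublishedContent:
    body: str
    ai_generated: bool

def label_ai_content(content: PublishedContent) -> str:
    """Prepend a disclosure notice to AI-generated text before publication."""
    if content.ai_generated:
        return "[This content was generated by an AI system]\n" + content.body
    return content.body

article = PublishedContent(body="Markets rallied today...", ai_generated=True)
print(label_ai_content(article))
```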

The EU AI Act will allow the free use of ‘minimal risk’ AI. This includes AI systems such as spam filters or AI-enabled video games. The vast majority of AI systems currently in use fall into this category.
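To make the four-tier structure easier to follow, here is a minimal sketch that models the categories in code. The tier names mirror the Act, but the example systems and one-line summaries are simplifications for illustration, not a legal mapping:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g. social scoring)"
    HIGH = "strict pre-market requirements (e.g. healthcare, critical infrastructure)"
    LIMITED = "transparency obligations (e.g. chatbots, AI-generated content)"
    MINIMAL = "free use (e.g. spam filters)"

# Illustrative classification of example systems; real categorization
# depends on the Act's detailed criteria, not a simple lookup table.
EXAMPLES = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "medical diagnosis assistant": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```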

What happens if you don’t comply? 

The EU AI Act will be primarily enforced through national competent market surveillance authorities in each Member State. Additionally, a new European AI Office will take up various administrative, standard-setting and enforcement tasks to ensure coordination across the continent.

For organisations that don't comply, fines for violations of the EU AI Act will depend on the type of AI system, the size of the company and the severity of the infringement.

Fines for infringing the EU AI Act include the following (a short calculation sketch after the list shows how the turnover-based amounts work):

  • 7.5 million euros or 1% of a company's total worldwide annual turnover (whichever is higher) for the supply of incorrect information
  • 15 million euros or 3% of a company's total worldwide annual turnover (whichever is higher) for violations of the EU AI Act's obligations
  • 35 million euros or 7% of a company's total worldwide annual turnover (whichever is higher) for violations of the Act's prohibitions on banned AI applications
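Because each penalty is the higher of a fixed amount and a percentage of worldwide turnover, the applicable fine scales with company size. A minimal sketch of that calculation, using the tiers listed above:

```python
def applicable_fine(fixed_cap_eur: float, turnover_pct: float,
                    worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum fine: the fixed cap or the turnover-based
    amount, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * worldwide_annual_turnover_eur)

# Example: a company with EUR 2 billion in worldwide annual turnover
# that violates the Act's ban on prohibited AI applications.
turnover = 2_000_000_000
fine = applicable_fine(35_000_000, 0.07, turnover)
print(f"Maximum fine: EUR {fine:,.0f}")  # EUR 140,000,000 (7% exceeds EUR 35M)
```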

One key outcome of the most recent negotiations on the EU AI Act is that administrative fines will now be subject to more proportionate caps for smaller companies and startups. The EU AI Act will also allow natural or legal persons to report instances of non-compliance to the relevant market surveillance authority.

What’s next? 

The majority of the Act's provisions will apply after a two-year grace period. However, the regulation's prohibitions will apply after just six months, and the obligations for general-purpose AI (GPAI) models will become effective after 12 months.
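Counting forward from the Act's entry into force on 1 August 2024, the staggered deadlines fall out roughly as follows (an approximate sketch; the exact application dates are fixed in the regulation's final provisions):

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: pip install python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force on 1 August 2024

# Approximate milestones based on the offsets described above.
milestones = {
    "Prohibitions on 'unacceptable risk' AI apply": relativedelta(months=6),
    "Obligations for GPAI models apply": relativedelta(months=12),
    "Majority of provisions apply (end of grace period)": relativedelta(months=24),
}

for label, offset in milestones.items():
    print(f"{label}: {ENTRY_INTO_FORCE + offset}")
```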

Despite this grace period, however, Michael Borrelli, Co-CEO/COO of AI & Partners, warns that organisations need to act now if they are to ensure compliance. 

"In the wake of last week's pivotal political approval, the urgency for companies to embark on their EU AI Act compliance journey is undeniable. The significance of this milestone underscores not only the transformative power of artificial intelligence but also the imperative for businesses to align with regulatory frameworks,” Mr Borrelli told EM360Tech. 

“Today marks the pivotal moment to proactively navigate the evolving landscape, ensuring ethical and responsible AI practices that not only comply with the EU AI Act but also foster trust, innovation, and sustainable growth in the digital era.”

Bart Vandekerckhove, CEO and co-founder of the data access management company Raito, also sees the urgency in complying sooner rather than later.

“The EU AI Act is not a stand-alone regulation and should be read in conjunction with the other European Data Privacy and Security Regulations,” he said. 

“Organisations that wait until 2025 to implement AI governance run the risk of being in breach of the GDPR and the NIS 2 Directive, as these regulations also cover concepts such as explainability and data security.”

Complying with the EU AI Act 

The EU seeks to position itself at the forefront of responsible AI development and to ensure that governance keeps pace with innovation in this rapidly evolving space. 

But complying with the act will be no easy task. According to Mr Flenley, the process will require a fundamental shift in the way organisations handle data and adopt AI in the workplace – especially in the financial sector. 

“It’s likely that banks and financial services firms will each assess this within their own risk appetite, leading to inevitable divergence in the extent to which customers of different banks can end up being treated,” Mr Flenley said.

“Consequently, this Act could see the rapid expansion of AI assistants slowed slightly as firms unpick the work undertaken thus far to ensure they don't fall into the category of ‘Unacceptable Risk’.”

The EU AI Act is likely to set a precedent for the rest of the world, encouraging other government bodies to implement similar regulations. Organizations will therefore need to comply with the EU AI Act if they wish to adhere to global compliance standards.

“If you want to develop, sell or implement AI projects in the EU, you need to be compliant with the EU AI Act. The good news is that once you are compliant with the EU AI Act, most probably you'll be 80%-90% compliant with all AI laws around the globe, as many of them have the same principles.”

“This is not a perfect law. But it is a great start that allows companies to better plan the development, testing and implementation of AI projects. I think that we will see many legal interpretations of the EU AI Act paragraphs, starting with the definition of what is and what is not an AI system.”

Among all the experts we spoke to, the message was clear: start the process of becoming compliant with the AI Act now.