'Act Now or Get Fined': Experts React to the EU AI Act


After months of negotiations, the European Parliament has finally passed its provisional agreement on the EU AI Act, ushering in new guardrails, consumer rights, and liability controls for the development of artificial intelligence. 

But just over a week after the act passed, experts are concerned that organisations aren't doing enough to prepare for this new legislation – which will likely set the benchmark for AI regulation across the globe.

What is the EU AI Act? 

The EU AI Act is a piece of legislation that aims to ensure the safety of AI systems on the EU market, provide legal certainty for investments and innovation in the AI space, and minimise risks to consumers.

It follows a risk-based approach to controlling AI, limiting systems with a higher risk potential through stricter controls on their development. 

The act was conceived as a landmark bill to mitigate harm in areas where using AI poses a risk to fundamental rights – including healthcare, education, border surveillance, and policing – and to ban systems that pose an "unacceptable risk" to the public.

EU AI Act rapporteur Brando Benifei speaking at the European Parliament.

"The EU AI Act sets out transparency obligations for producers, vendors and deployers of limited- and high-risk AI algorithms," said Tudor Galos, founder of Tudor Galos Consulting and privacy consultant.

"They need to examine and remove biases, ensure data quality, provide explainability of AI systems, have human oversight and aim for accuracy, robustness and cybersecurity."

What AI systems does the EU AI Act impact?

Because the final text of the Act has not yet been published, little is known about the details of the agreement – including exactly how it classifies the risk profile of AI systems. The classification reportedly aligns with the approach proposed during the parliament's initial negotiating period.

This means that AI systems that the European Parliament believes create an unacceptable risk and contravene EU values and fundamental rights will be banned in the EU.

AI systems with an 'unacceptable risk’ include:

  1. Biometric categorisation systems that use sensitive characteristics (e.g. political, religious or philosophical beliefs, sexual orientation, race).
  2. Untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases.
  3. Emotion recognition in the workplace and schools. 
  4. Social scoring based on social behaviour or personal characteristics.
  5. AI systems that manipulate human behaviour to circumvent their free will.
  6. AI that is used to exploit the vulnerabilities of people.
  7. Certain applications of predictive policing.

While biometric identification systems will be banned in principle, a special agreement has been reached for their use in publicly accessible spaces for law enforcement purposes. This means that they may only be used after prior judicial authorisation and for the prosecution of a strictly defined list of crimes, and national data protection authorities will need to be notified when they are being used, among other conditions.

AI systems the EU Parliament deems 'high-risk' – which include AI screening tools for education and recruitment, medical devices, and systems used in law enforcement, border control, the administration of justice and democratic processes – will not be banned.

Instead, developers will need to adhere to mandatory compliance obligations and undergo conformity assessments to evaluate their compliance with the Act, with an emergency procedure "allowing law enforcement agencies to deploy a high-risk AI tool that has not passed the conformity assessment procedure in case of urgency".

AI systems classified as limited-risk, including chatbots and certain emotion recognition and biometric categorisation systems, as well as systems generating deepfakes, will be subject to more minimal transparency obligations.

These transparency requirements include informing users that they are interacting with an AI system and marking synthetic audio, video, text and image content as artificially generated or manipulated for users and in a machine-readable format.

Foundation models like large language models (LLMs), however, will be subject to special safeguards for General-Purpose AI (GPAI) models that are yet to be finalised.

The EU Parliament believes GPAI systems can cause systemic risk at the EU level – therefore requiring dedicated, stricter regulation.

The most famous example of these GPAI models is GPT-4, which is the LLM that powers OpenAI’s explosive chatbot ChatGPT. 



"Perhaps curiously, the timing of both the Act and the rate of change in the industry means that generative AI and foundation models have no specific provisions in the Act," said Matthew Flenley, Head of Marketing at Datactics.

"Rather than use this as a clean slate to go and build any form of GPT, firms should (and, most likely, will) widen their application of this Act to all and any such AI developments they undertake from here on in."

What happens if you don’t comply? 

The EU AI Act will be primarily enforced through national competent market surveillance authorities in each Member State. Additionally, a new European AI Office will take up various administrative, standard-setting and enforcement tasks to ensure coordination across the continent.

For organisations that don’t comply, fines for violations of the EU AI Act will depend on the type of AI system, size of the company and severity of the infringement.

Fines for infringing on the EU AI Act include: 

  • 7.5 million euros or 1.5% of a company's total worldwide annual turnover (whichever is higher) for the supply of incorrect information
  • 15 million euros or 3% of a company's total worldwide annual turnover (whichever is higher) for violations of the EU AI Act's obligations
  • 35 million euros or 7% of a company's total worldwide annual turnover (whichever is higher) for violations of the banned AI applications
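The "whichever is higher" rule in each tier above can be sketched as a simple calculation. This is an illustrative sketch only – the tier amounts come from the list above, while the company turnover figures are hypothetical:

```python
def eu_ai_act_max_fine(fixed_cap_eur: float, turnover_share: float,
                       annual_turnover_eur: float) -> float:
    """Return the applicable maximum fine: the fixed cap or the share of
    total worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Hypothetical firm with 1 billion euros in worldwide annual turnover:
# the banned-applications tier (35m euros or 7%) is driven by turnover.
print(eu_ai_act_max_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0

# Hypothetical smaller firm with 100 million euros in turnover:
# the incorrect-information tier (7.5m euros or 1.5%) falls back to the cap.
print(eu_ai_act_max_fine(7_500_000, 0.015, 100_000_000))  # 7500000.0
```

Note that for large companies the percentage dominates, while for smaller ones the fixed cap is the binding figure – which is why the negotiated caps for startups (mentioned below in the article) matter.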

One key outcome of the most recent negotiations is that administrative fines will be subject to more proportionate caps for smaller companies and startups. The EU AI Act will also allow natural or legal persons to report instances of non-compliance to the relevant market surveillance authority.

What’s next? 

With a political agreement reached, the EU AI Act will soon be officially adopted by the EU Council and published in the EU's Official Journal to enter into force. 

The majority of the Act's provisions will apply after a two-year grace period for compliance. However, the regulation's prohibitions will already apply after six months and the obligations for GPAI models will become effective after 12 months.
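The staggered timeline above can be expressed as simple date arithmetic. A minimal sketch, assuming a hypothetical entry-into-force date (the real date depends on publication in the Official Journal):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by a whole number of months."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

# Hypothetical entry-into-force date, pending Official Journal publication.
entry_into_force = date(2024, 6, 1)

milestones = {
    "prohibitions apply":        add_months(entry_into_force, 6),
    "GPAI obligations apply":    add_months(entry_into_force, 12),
    "most provisions apply":     add_months(entry_into_force, 24),
}

for milestone, when in milestones.items():
    print(f"{milestone}: {when.isoformat()}")
```

The point the experts make below is visible in the arithmetic: the first deadline lands only six months after entry into force, far sooner than the headline two-year grace period suggests.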

Despite this grace period, however, Michael Borrelli, Co-CEO/COO of AI & Partners, warns that organisations need to act now if they are to ensure compliance. 

"In the wake of last week's pivotal political approval, the urgency for companies to embark on their EU AI Act compliance journey is undeniable. The significance of this milestone underscores not only the transformative power of artificial intelligence but also the imperative for businesses to align with regulatory frameworks,” Mr Borrelli told EM360Tech. 

"Today marks the pivotal moment to proactively navigate the evolving landscape, ensuring ethical and responsible AI practices that not only comply with the EU AI Act but also foster trust, innovation, and sustainable growth in the digital era."

Bart Vandekerckhove, CEO and co-founder of the data access management company Raito, also sees the urgency in complying sooner rather than later.

“The EU AI Act is not a stand-alone regulation and should be read in conjunction with the other European Data Privacy and Security Regulations,” he said. 

"Organisations that wait until 2025 to implement AI governance run the risk of being in breach of the GDPR and the NIS 2 Directive, as these regulations also cover concepts such as explainability and data security."

Complying with the EU AI Act 

With the imminent entry into force of the landmark EU AI Act, the EU seeks to position itself at the forefront of responsible AI development and to ensure that governance keeps pace with innovation in this rapidly evolving space. 

But complying with the act will be no easy task. According to Mr Flenley, the process will require a fundamental shift in the way organisations handle data and adopt AI in the workplace – especially in the financial sector. 

“It’s likely that banks and financial services firms will each assess this within their own risk appetite, leading to inevitable divergence in the extent to which customers of different banks can end up being treated,” Mr Flenley said.

"Consequently, this Act could see the rapid expansion of AI assistants slowed slightly as firms unpick the work undertaken thus far to ensure they don't fall into the category of 'Unacceptable Risk'."

The EU AI Act is likely to set a precedent for the rest of the world, encouraging other government bodies to implement similar regulations. Organisations will therefore need to comply with the EU AI Act if they wish to adhere to global compliance standards.

“If you want to develop, sell or implement AI projects in the EU, you need to be compliant with the EU AI Act. The good news is that once you are compliant with the EU AI Act, most probably you'll be 80%-90% compliant with all AI laws around the globe, as many of them have the same principles.”

“This is not a perfect law. But it is a great start that allows companies to better plan the development, testing and implementation of AI projects. I think that we will see many legal interpretations of the EU AI Act paragraphs, starting with the definition of what is and what is not an AI system.”

‘Act now or get fined’

With all the experts we spoke to, the message was clear: start the process of becoming compliant with the EU AI Act now, rather than waiting until 2025.

Michelle Pugh, Data Management Specialist at EM360Tech, believes organisations that act long before the grace period ends will be the ones that stay compliant in the years ahead.

“In catalyzing the rapid adaptation to the EU AI Act, solution providers wielding AI products assume a pivotal role, delivering targeted benefits crucial for compliance within the BFSI, Legal, and Healthcare sectors,” Ms Pugh said. 

“The imperative is clear—act now, or get fined.”
