According to a McKinsey report, by 2025 more than 75% of banks with assets exceeding one hundred billion dollars will have integrated AI wherever possible. One might think that with such widespread adoption, the industry has become flawless. But do you know what happened to the Titanic? It was also "unsinkable" until it sank.

To understand the challenges of artificial intelligence in banking, we must look beyond the adoption statistics. Bankers face real risks of AI in banking: data security, ethics, and risk management. Additionally, banks must consider regulatory requirements, ensure protection of clients' personal data, and integrate AI into legacy systems, which often becomes more complex than the technology itself. AI creates new types of threats: fraud through automated processes, incorrect lending decisions, potential financial data breaches, and even reputational risks. Moreover, clients expect transparency in algorithmic decisions and the ability to control their finances. In short, it's not that simple.

According to McKinsey's forecasts, generative AI could bring the banking sector between 200 and 340 billion dollars annually. Sounds fantastic. But along with the money come problems that cannot simply be ignored: fraud, algorithmic errors, unpredictable system behaviour, and the need for constant model monitoring. Let's figure out what banks actually face today with the arrival of AI, and how to overcome these challenges without major losses.

Beyond the potential benefits, the disadvantages of AI in banking become apparent as soon as we look at data quality.


Data: The First Major Stumbling Block

AI lives and breathes data. Without it, it's useless. And that's where the first real problem begins: what if the data is outdated, incomplete, or simply wrong? How do you collect it, and how do you filter it?

AI learns from history: old transactions, credit reports, and client behaviour. And if an error has crept in there, the consequences can be painful. A bank may deny a loan because of outdated data, or block a legitimate transaction because "the machine decided so" based on incorrect or stale information.

The reasons for the problem run deeper than they seem:

  • Data quality: old systems with errors accumulated over the years, and no common standards.
  • System incompatibility: combining ancient databases with modern AI algorithms is like connecting a mechanical clock to a smartphone.
  • Cost and time: converting everything to a single data processing system can take years. And even then, half the data may still contain "legacy" errors.

In fact, many banks' AI learns from "garbage", and then garbage results become inevitable. The right software goes a long way towards solving this problem: providers of banking solutions know how to filter, clean, and correctly pass data to AI models. A simplified version of such pre-model checks is sketched below.
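As an illustration of what "cleaning" can mean in practice, here is a minimal Python sketch of checks a data pipeline might run before records reach a model. The column names, the seven-year cutoff, and the validation rules are illustrative assumptions, not any specific vendor's pipeline:

```python
# Minimal sketch: drop records that are stale, duplicated, or invalid
# before they reach a model. All rules here are illustrative.
import pandas as pd

def clean_for_model(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates(subset="transaction_id")   # no double-counted rows
    df = df[df["amount"] > 0]                          # no zero/negative amounts
    df = df[df["timestamp"].notna()]                   # no missing dates
    cutoff = pd.Timestamp.now() - pd.DateOffset(years=7)
    df = df[df["timestamp"] >= cutoff]                 # drop stale history
    return df.reset_index(drop=True)

raw = pd.DataFrame({
    "transaction_id": [1, 1, 2, 3],
    "amount": [120.0, 120.0, -5.0, 80.0],
    "timestamp": pd.to_datetime(["2024-01-02", "2024-01-02", "2024-03-01", None]),
})
print(clean_for_model(raw))  # only the single valid, unique row survives
```

Real pipelines add many more rules (schema checks, outlier detection, cross-system reconciliation), but the principle is the same: validate before you train.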

The Bank Doesn't Understand What the AI Is Doing

One of the biggest headaches is when AI works like a black box: you throw something in, and a decision comes out. But how exactly did the system decide that? A mystery. For banks, which must explain everything to regulators and clients, this is critical.

When a client asks a bank, "Why did you deny my loan application?", the bank cannot simply answer, "The algorithm decided so." Such an answer wouldn't hold up legally and certainly wouldn't improve the bank's reputation. Regulators demand transparency, especially in lending and insurance. The US Federal Trade Commission has already started fining companies for AI-driven decisions they could not explain.

Therefore, banks seek a balance between efficiency and transparency. Some have already introduced the role of "AI advocate" and improved their software so that they can:

  • Break down algorithmic decisions and explain them to clients and regulators.
  • Help with audits and legal reviews.
  • Work on making the algorithm leave "traces" of explained decisions.

The problem has become somewhat more acute with the emergence of generative AI. But quality banking software does the corrective work: it filters, normalises, and passes data to AI in the required format, minimising the risk of errors. Explainability tooling helps too, as the sketch below shows.
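One widely used open-source approach to opening the black box is SHAP, which attributes a model's score for a single case to its input features. Here is a minimal sketch on toy data; the features, the model, and the synthetic labels are illustrative, not a real scoring setup:

```python
# Minimal sketch: explain one credit decision with SHAP feature attributions.
# The data, features, and model are toy stand-ins, not a production scorer.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income":          rng.normal(50_000, 15_000, 1_000),
    "debt_ratio":      rng.uniform(0, 1, 1_000),
    "missed_payments": rng.poisson(0.5, 1_000),
})
# Synthetic "denied" label so the example is self-contained.
y = ((X["debt_ratio"] > 0.6) | (X["missed_payments"] > 2)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Which features pushed this applicant's score toward denial?
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]
contributions = explainer.shap_values(applicant)[0]

for feature, value in sorted(zip(X.columns, contributions),
                             key=lambda t: -abs(t[1])):
    print(f"{feature}: {value:+.3f}")
```

The output is a ranked list of factors ("debt_ratio pushed the score up by X"), which is exactly the kind of trace an "AI advocate" can translate for a client or a regulator.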

Algorithmic Bias: When the Machine Discriminates

AI projects are known to produce biased results, and the reason is often not the AI itself but the data or algorithms it relies on. For banks, this is serious: it can lead to unfair denials of loans or other services to certain groups of people based on race, gender, or social status.

How does it work? If AI learns from historical data where certain groups were systematically denied loans, it simply repeats the past without understanding that this is unfair. For example:

  • Credit scoring systems often rely on FICO scores without considering regular rent, utility, or mobile phone payments, which leaves some people worse off.
  • Amazon experimented with AI for recruiting, but scrapped the system in 2018 after it proved biased against women.

To avoid such situations, banks actively catch and correct biases. This means:

  • Using diverse data to train AI.
  • Regularly checking decision fairness.
  • Manually adjusting model parameters to ensure equal conditions for everyone.

Some institutions already conduct "AI fairness audits", where independent experts test models for discrimination. This is becoming a new norm in responsible AI use.
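A fairness audit can start with something very simple: compare outcome rates across groups. The sketch below applies the "four-fifths rule", a common screening heuristic borrowed from US employment practice, to toy approval data; the groups, columns, and threshold are illustrative:

```python
# Minimal sketch of a fairness spot-check: compare approval rates by group
# and flag a disparate impact ratio below 0.8 (the "four-fifths rule").
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: approval rates differ enough to warrant a bias review.")
```

Passing such a check proves nothing by itself, but failing it is a strong signal that the model deserves a closer look.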

Cybersecurity: AI as a Weapon Against You

AI helps banks catch fraud, but it can also become a tool for attacks. According to Deloitte estimates, losses from fraud at American banks could grow from 12.3 billion dollars in 2023 to 40 billion by 2027—and generative AI development plays a key role here.

Criminals use AI for:

  • Complex phishing attacks.
  • Voice spoofing of bank executives (deepfake).
  • Creating bots that bypass security systems.

Imagine a situation: an employee receives a video call from the "director" urgently asking them to transfer money. The video looks realistic and the voice matches, but it's a fake. This has already happened: in 2024, a Hong Kong company lost 25 million dollars to exactly such a deepfake call. And the threat is anything but theoretical: search TikTok for "deepfake" and you'll find thousands of convincing fake videos.

Another significant risk of AI in banking emerges when AI models themselves become targets:

  • Poisoning of the training data.
  • Tampering with the algorithm's parameters.
  • Injection of "malicious" logic that distorts decisions.

Banks are forced to invest in multi-level protection: constantly monitoring anomalies, testing models for attack resistance, and regularly updating security systems. This is not cheap: the average large bank's cybersecurity budget grows by 15–20% annually.
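On the defensive side, anomaly monitoring often starts with unsupervised models that learn what "normal" traffic looks like and flag deviations. Here is a minimal sketch with scikit-learn's IsolationForest, using invented transaction features and thresholds:

```python
# Minimal sketch: flag anomalous transactions with an isolation forest.
# Features (amount, hour of day, 24h transaction count) are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Synthetic "normal" history: modest amounts, daytime hours, low velocity.
normal = rng.normal(loc=[80, 14, 3], scale=[30, 4, 1], size=(5_000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

incoming = np.array([
    [75, 13, 2],       # typical purchase
    [9_500, 3, 40],    # huge amount, 3 a.m., burst of activity
])
for row, flag in zip(incoming, detector.predict(incoming)):  # -1 = anomaly
    print(row, "ANOMALY" if flag == -1 else "ok")
```

In production this is one layer among many: rule engines, supervised fraud models, and human review sit alongside it.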

Regulators: A Labyrinth of Rules

The financial sector is probably the most heavily regulated of all, and AI adds a whole new layer of requirements. Regulators want to see how a bank makes decisions and want guarantees that AI doesn't violate laws or client rights.

The problem is that laws about AI are still being written. Banks implement technologies faster than new rules appear. This creates uncertainty: a system that is legal today may become problematic tomorrow.

  • More than 92% of banks already use AI or plan to adopt it.
  • But most are concerned about regulatory uncertainty.
  • Central banks have begun issuing recommendations, but they often differ from country to country.
  • GDPR and other data protection laws require a lawful basis, such as client consent, for collecting information, and demand that personal data be minimised and pseudonymised where possible.

Any violation costs not only millions in fines but also client trust. For example, in 2024, one European bank received a 50 million euro fine for improper AI use in lending.

The Fear That AI Will Replace People

Bank employees often fear that AI will replace them. This fear is real (especially for management positions) and affects the pace of technology adoption. But the problem is much broader: banks simply lack people who understand AI and can control it.

Even the most advanced model cannot work alone. A person is needed to control AI, especially in complex cases that require:

  • Empathy and creativity;
  • Ethical and moral decisions;
  • Understanding the nuances of client situations.

Yes, professions such as call centre operators, customer support staff, auditors, and credit analysts are gradually disappearing. But to launch AI, banks need new specialists: data scientists, ML engineers, and AI ethics experts. Staff training has become critically important:

  • In an IBM Institute for Business Value (IBV) study, 33% of respondents named talent development as key to expanding AI.
  • Banks are investing not only in software but also in the people who apply it.
  • Training programmes, interdisciplinary teams, and a culture of experimentation are becoming the norm.

Some large banks have already opened their own "AI academies", but demand for specialists still exceeds supply.

Model Hallucinations: When AI Makes Things Up

AI sometimes generates plausible-sounding nonsense, known as "hallucinations". In banking, this can become a real disaster. The model simply invents information that doesn't exist in the data, and banks, where precision and accuracy are critical, cannot afford such mistakes. Imagine an AI consultant inventing a payment history for a client in order to propose an unsuitable credit product or investment service: the consequences can be both financial and reputational.

To avoid this, banks must balance accuracy and AI flexibility. One approach is to create specialised models for specific tasks, which minimises the risk of fabrication but also reduces system versatility. Another approach is to apply so-called "guardrails" for AI: special restrictions that prevent models from "making up" data beyond real indicators.
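A guardrail can be as simple as refusing to surface any generated claim that can't be verified against the system of record. A toy Python sketch; the record store, account IDs, and claim format are invented for illustration:

```python
# Toy guardrail: block generated answers whose factual claims don't match
# the bank's actual records. Data and field names are invented.
RECORDS = {"acct-42": {"balance": 1_250.00, "missed_payments": 0}}

def claims_are_grounded(account_id: str, claims: dict) -> bool:
    """True only if every claimed field matches the stored record exactly."""
    record = RECORDS.get(account_id)
    if record is None:
        return False
    return all(record.get(field) == value for field, value in claims.items())

generated_claims = {"missed_payments": 3}   # the model hallucinated this
if not claims_are_grounded("acct-42", generated_claims):
    print("Blocked: generated answer contradicts the account record.")
```

Production guardrails are richer (schema validation, allowed-value ranges, citation checks), but the principle is the same: the model proposes, the data disposes.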

Integration with Legacy IT Systems

One of the most underestimated challenges of artificial intelligence in banking is integration with legacy IT systems. Many banks still operate on infrastructure written decades ago, which wasn't designed for modern AI solutions. This creates a host of problems:

  • Incompatibility of old databases with new algorithms;
  • Information synchronization problems between different systems;
  • Downtime during updates that can affect client operations;
  • Security risks if outdated systems leave "holes" through which AI or external attackers can slip.

A systematic approach to modernisation becomes critical. Comprehensive solution providers, such as DXC, offer integration of new applications in the financial sector while updating old systems. This allows banks to safely launch AI while ensuring continuous client access to their accounts and services, and minimising operational risks.
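What does "integration" look like at the code level? Often it's an adapter layer that translates legacy formats into something modern services can consume. Here is a toy sketch parsing a fixed-width, mainframe-style record; the record layout is entirely invented:

```python
# Toy adapter: convert a fixed-width legacy record into typed, named fields
# that a modern AI service can consume. The layout below is invented.
from datetime import datetime

def parse_legacy_record(line: str) -> dict:
    return {
        "account_id": line[0:10].lstrip("0"),
        "amount":     int(line[10:22]) / 100,  # legacy cores often store cents
        "timestamp":  datetime.strptime(line[22:36], "%Y%m%d%H%M%S"),
        "currency":   line[36:39],
    }

# 10-char account, 12-char amount, 14-char timestamp, 3-char currency.
record = "0000004242" "000000012500" "20240501093000" "EUR"
print(parse_legacy_record(record))
# {'account_id': '4242', 'amount': 125.0, 'timestamp': ..., 'currency': 'EUR'}
```

An adapter like this keeps the legacy core untouched while giving new systems clean, validated input, which is usually safer than a big-bang migration.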

The Way Forward

AI is genuinely transforming the banking industry, but the path is not simple and demands a conscious approach. The disadvantages of AI in banking span technology, ethics, regulation, and human factors. Banks that want to remain competitive must understand these risks and implement AI wisely.

Key priorities for success:

  • Data quality and standardisation—without clean and structured data, AI doesn't work effectively.
  • Transparent models—banks must be able to explain AI decisions to regulators and clients.
  • Active bias fighting—regular audits and algorithm monitoring ensure fair decisions.
  • Enhanced cybersecurity—multi-level AI protection and constant anomaly monitoring.
  • Close cooperation with regulators—predictability and legal compliance minimise fine risks.
  • Investment in people and training—AI requires specialists capable of controlling, analysing, and improving it.

This requires time, resources, and financial investment. But the alternative is worse: banks that fail to address these challenges risk falling behind and losing client trust.

Those who correctly integrate AI gain significant advantages:

  • Increased operational efficiency;
  • Improved client service;
  • Ability to launch new products and services faster than competitors.

The future of banking is being shaped today. The question is not whether your bank will be part of the AI transformation, but whether it will become a leader, overcoming technological, ethical, and operational challenges. Banks that do this right now will gain a significant market advantage for the next 5–10 years.