Financial crime is no longer a peripheral concern for banks and fintechs; it is a defining operational challenge. The pressure to grow transaction volumes, onboard customers quickly, and keep pace with increasingly sophisticated fraud actors has placed finance and compliance teams at the very heart of business strategy. For many institutions, the question is no longer how to use artificial intelligence in their fraud detection stack, but how to use it responsibly.

In this episode of the Security Strategist podcast, Jonathan Care, Senior Lead Analyst at KuppingerCole, speaks with Kunal Datta, Chief Product Officer at Unit21, about how financial crime prevention technology is changing and the gaps that remain in the industry.

The role of AI in fraud detection

For most of the past two decades, financial crime prevention operated on one of two tracks. Larger, data-rich institutions invested in machine learning models capable of identifying complex behavioural patterns across millions of transactions. Smaller players, or those entering new product categories with thin data histories, tended to rely on rules-based systems, which are explicit, human-authored logic that flags transactions meeting predefined criteria.

Both approaches have genuine strengths. Rules-based systems are auditable, easy to explain to a regulator, and quick to update when a new fraud typology emerges. Machine learning systems are far more powerful at surfacing non-obvious correlations and adapting to evolving attack patterns, but they require substantial training data and significant engineering effort to deploy.
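The rules-based approach described above can be illustrated with a minimal sketch. The thresholds, field names, and rule labels here are hypothetical, not drawn from any real system; production rule engines use institution-specific criteria and far richer transaction data.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    is_new_account: bool

def flag_transaction(txn: Transaction) -> list[str]:
    """Return the names of all rules this transaction triggers.
    Rules are explicit, human-authored, and easy to audit."""
    flags = []
    if txn.amount > 10_000:                      # hypothetical threshold
        flags.append("large_amount")
    if txn.is_new_account and txn.amount > 1_000:
        flags.append("new_account_high_value")
    if txn.country in {"XX", "YY"}:              # placeholder high-risk list
        flags.append("high_risk_country")
    return flags
```

Because each rule is a named, deterministic check, an analyst can explain to a regulator exactly why a transaction was flagged, and a new fraud typology can be covered by appending one more condition.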

The arrival of large language models and generative AI has introduced a third paradigm, one that is fundamentally non-deterministic. Unlike a rule that fires predictably on every run, or an ML model that produces a consistent probability score for a given feature vector, a generative AI system may reason differently across identical inputs. This has profound implications for how institutions build, test, and govern their fraud detection infrastructure.

Balancing revenue growth and fraud risk

Perhaps the most underappreciated tension in financial crime prevention is not technical; it is commercial. Every fraud control is also a friction point. A transaction declined as suspicious is, from the customer's perspective, simply a transaction that failed. Every false positive erodes trust, damages conversion rates, and risks losing a customer to a competitor with a more permissive onboarding flow. According to Datta:

“Machine learning excels at identifying complex patterns, but rules-based systems can quickly adapt to new types of fraud that humans can spot with minimal examples.”

This means that fraud teams are never optimising for fraud prevention in isolation. They are solving a constrained optimisation problem: minimising fraud losses while simultaneously protecting revenue, preserving customer experience, and staying within the bounds of what regulators require. AI can shift that frontier, enabling more precise risk assessment that reduces fraud and false positives at the same time, but only if it is deployed and governed carefully.
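The trade-off can be made concrete with a toy cost model. All numbers here are invented for illustration: each transaction carries a risk score, and the team picks a decline threshold that minimises the combined cost of approved fraud and revenue lost to false positives.

```python
def expected_cost(threshold: float, transactions) -> float:
    """Total cost of a decline threshold: full loss on approved fraud,
    plus an assumed 5% margin lost on each declined good customer."""
    cost = 0.0
    for score, amount, is_fraud in transactions:
        declined = score >= threshold
        if declined and not is_fraud:
            cost += amount * 0.05   # false positive: lost revenue margin
        elif not declined and is_fraud:
            cost += amount          # false negative: full fraud loss
    return cost

# (risk score, amount, actually fraud?) -- hypothetical sample
sample = [
    (0.95, 800.0, True),
    (0.40, 120.0, False),
    (0.70, 300.0, False),
    (0.85, 500.0, True),
]

# Sweep a few candidate thresholds and keep the cheapest one.
best_cost, best_threshold = min(
    (expected_cost(t, sample), t) for t in (0.5, 0.75, 0.9)
)
```

A threshold that is too low declines good customers; one that is too high approves fraud. A more precise risk score tightens the score distribution, which lowers the cost at every threshold, which is exactly the frontier shift described above.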

The future of AI in financial crime

Looking forward, Datta sees the trajectory of AI in financial crime prevention pointing towards systems that combine the pattern-recognition power of machine learning with increasingly robust mechanisms for transparency and accountability. The goal is not to choose between a powerful AI and an explainable one; it is to build infrastructure that delivers both.

Several technical approaches are emerging to close this gap. Structured output formatting, which requires AI systems to return decisions in machine-readable formats such as JSON with explicit reasoning chains, makes it possible to audit AI behaviour at scale. Evaluation sets, which establish a curated baseline of labelled cases against which model performance is continuously benchmarked, allow institutions to detect drift and maintain defensible performance records.
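Both ideas can be sketched briefly. The schema below (field names, allowed decision values, and the stub evaluation loop) is an assumption for illustration, not a description of any particular vendor's format: the model is required to emit JSON with a decision, a score, and its reasoning, and its decisions are scored against a labelled evaluation set.

```python
import json

REQUIRED_FIELDS = {"decision", "risk_score", "reasoning"}

def parse_ai_decision(raw: str) -> dict:
    """Validate that a model response follows the required JSON schema,
    so every decision is auditable and malformed output fails loudly."""
    payload = json.loads(raw)
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"AI output missing fields: {sorted(missing)}")
    if payload["decision"] not in {"approve", "decline", "review"}:
        raise ValueError("decision outside allowed values")
    return payload

def eval_accuracy(model_fn, eval_set) -> float:
    """Benchmark a model against labelled cases; a drop in this
    number over time is a drift signal."""
    correct = sum(
        parse_ai_decision(model_fn(case))["decision"] == label
        for case, label in eval_set
    )
    return correct / len(eval_set)
```

Running `eval_accuracy` on the same curated set after every model change gives the continuous, defensible performance record the paragraph describes.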


The institutions that will lead this space are those treating AI governance not as a compliance overhead but as a competitive advantage. A well-governed AI system wins regulatory approval faster, ships new capabilities faster, and proves more resilient when regulatory scrutiny increases.

The most striking thread in Datta's thinking is his insistence on placing financial crime prevention within a broader moral frame. Financial crime is not merely an operational risk; it is a conduit for some of the most serious harms in the world: human trafficking, modern slavery, terrorist financing, and the systematic exploitation of vulnerable people. Viewed through this lens, the deployment of better AI in financial crime prevention is not primarily a business efficiency story. It is a contribution to a more just and safer world. Datta says:

“AI should be viewed not only as an efficiency driver but as a tool to address broader societal issues like human trafficking and exploitation. Better detection is a moral obligation.”

This framing matters for how organisations think about investment in financial crime technology. If AI in fraud prevention is treated purely as a cost centre, it will always lose budget battles to revenue-generating activities; framed as both a commercial safeguard and a societal obligation, it becomes an investment worth defending.

If you would like to find out more, visit Unit21.ai or read Kunal Datta's article, Rules vs. Machine Learning: Finding the Best of Both Worlds.

If you are looking to strengthen how your organisation identifies and manages risk, you can request a personalised demo with Unit21.

Takeaways

  • Evolution of financial crime detection over the last decade
  • Deterministic vs non-deterministic AI systems in fraud prevention
  • The role of generative AI and context engineering in compliance
  • Accountability and explainability in AI-driven decision making
  • Regulatory perspectives on AI and risk management