Artificial intelligence (AI) now touches every part of how money moves. Banks, marketplaces and payment providers deploy machine-learning models to scan enormous volumes of data for signs of fraud; identity checks have been automated with facial recognition and document readers; and regulatory reporting relies increasingly on predictive analytics.
At the same time, AI has given criminals new tools. Generative models craft flawless phishing emails, voice-cloning services mimic executives to approve fraudulent transfers, and synthetic identities built with AI slip through onboarding checks. The result is a paradox: the same technology both shields and threatens our payment systems.
Visa's AI-driven platforms blocked 80 million fraudulent transactions worth around US$40 billion in 2023, yet cybercrime losses are projected to exceed US$10.5 trillion by 2025.
G2A.COM CEO Bartosz Skwarczek summed up the dilemma succinctly when he said that AI is a double-edged sword; G2A.COM and its partners use AI to deliver secure services while "malicious actors… also use AI to be smarter with cheating." Understanding this duality is the first step toward harnessing AI responsibly.
Real-Time AI Fraud Detection and the Deepfake Threat
AI's ability to process data quickly has transformed fraud prevention. Sophisticated machine-learning engines can analyse patterns across billions of card swipes, e-commerce purchases and wire transfers to flag anomalies and suspicious activity faster than any team of human analysts.
The PCI Security Standards Council notes that payment providers rely on AI to spot anomalies in enormous global datasets. Systems like Mastercard's Decision Intelligence examine thousands of variables per transaction, helping to reduce false declines while catching more genuinely fraudulent transactions.
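To make the mechanics concrete, here is a minimal sketch of how such an engine might score transactions: an unsupervised model is trained on a customer base's normal behaviour and flags outliers for manual review. The features, synthetic data and thresholds below are illustrative assumptions, not any vendor's actual system.

```python
# Minimal sketch: anomaly scoring of card transactions with an unsupervised model.
# Feature names and thresholds are illustrative; real systems use thousands of signals.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" history: [amount_eur, hour_of_day, km_from_home, merchant_risk_score]
normal = np.column_stack([
    rng.gamma(shape=2.0, scale=30.0, size=5000),   # typical small purchases
    rng.integers(8, 23, size=5000),                # daytime/evening activity
    rng.exponential(scale=5.0, size=5000),         # usually close to home
    rng.uniform(0.0, 0.3, size=5000),              # low-risk merchants
])

model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(normal)

# Two incoming transactions: one routine, one unusual (large amount, 3 a.m., far away).
incoming = np.array([
    [45.0, 14, 2.0, 0.1],
    [950.0, 3, 4200.0, 0.8],
])

scores = model.decision_function(incoming)   # lower = more anomalous
flags = model.predict(incoming)              # -1 = flag for review, 1 = looks normal

for tx, score, flag in zip(incoming, scores, flags):
    action = "send to manual review" if flag == -1 else "approve"
    print(f"amount={tx[0]:>7.2f} EUR  score={score:+.3f}  -> {action}")
```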
Surveys show that around three-quarters of financial institutions now use AI for fraud and financial crime detection and that most report faster response times thanks to automation.
Despite these gains, fraudsters are keeping pace. Generative AI has lowered the bar for phishing and social-engineering attacks. Sift's 2025 Digital Trust Index found that AI-enabled scams and phishing reports increased by more than 450 per cent over the previous year.
Deepfake technology is especially dangerous; in February 2024, criminals used a deepfake video call to impersonate a company's CFO and trick an employee into transferring HK$200 million (roughly US$25 million). Europol warns that organised crime rings are now using AI-generated fingerprints, faces and voices to bypass biometric checks.
Synthetic identities, which blend real and stolen data, are now the fastest-rising threat according to the Federal Reserve, and more than half of banks and fintechs have already run into them during onboarding. That reality demands a layered defence. Swap out static rules for adaptive models that learn from fresh patterns, and complement those with regular red-team drills and model checks to expose blind spots.
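As a rough illustration of what "adaptive" can mean in practice, the sketch below updates a fraud model incrementally on each new batch of labelled outcomes instead of relying on static rules; the features, fraud rate and batch sizes are invented for the example.

```python
# Illustrative sketch of an adaptive fraud model: update on fresh labelled batches
# (e.g. confirmed chargebacks) rather than relying on static rules. Features are made up.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
scaler = StandardScaler()
model = SGDClassifier(loss="log_loss", random_state=0)

def new_batch(n=1000):
    """Simulate a day's transactions: features plus fraud labels (~1% fraud)."""
    X = rng.normal(size=(n, 4))
    y = (rng.random(n) < 0.01).astype(int)
    X[y == 1] += 2.5          # fraudulent transactions look shifted in feature space
    return X, y

# Warm-up on the first batch, then keep learning as new outcomes arrive.
X0, y0 = new_batch()
scaler.partial_fit(X0)
model.partial_fit(scaler.transform(X0), y0, classes=np.array([0, 1]))

for day in range(1, 8):
    X, y = new_batch()
    scaler.partial_fit(X)                       # track drift in feature distributions
    model.partial_fit(scaler.transform(X), y)   # track drift in fraud patterns
    print(f"day {day}: trained on {len(y)} transactions, {y.sum()} confirmed fraud")
```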

Think of your fraud controls as a muscle: the more you test and strengthen it, the better it performs against sophisticated attackers. Human factors matter too: G2A.COM trains its staff monthly to recognise phishing and social engineering. Information sharing is vital; 88 per cent of fraud leaders in BioCatch's 2024 survey believe stronger collaboration between institutions and regulators is required.
In short, fighting AI-driven fraud demands a mix of smarter algorithms, proactive testing and well-trained people.
AI-Powered Transaction Monitoring and AML Risk Scoring
Detecting fraud is one side of the coin; tracing and preventing money laundering is another. AI now plays a central role in transaction monitoring and AML compliance, delivering significant performance gains.
The Financial Action Task Force (FATF) notes that AI can identify up to 40 per cent more suspicious activities while reducing compliance costs by 30 per cent. Machine-learning models evaluate each payment's risk in real time, allowing banks to intervene instantly when patterns suggest smurfing, layering or sanctions evasion.
This real-time risk scoring underpins Europe's transaction risk analysis (TRA) exemption in PSD2, which allows low-risk transactions to skip strong customer authentication if providers keep fraud rates below strict thresholds (0.13 per cent for transactions up to €100).
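As a simple illustration of how that exemption logic might be wired up, the sketch below combines a real-time model score with the provider's reference fraud rate before deciding whether a payment can skip strong customer authentication. Only the 0.13 per cent / €100 pairing comes from the text above; the higher bands and the risk threshold are assumptions that should be checked against the RTS and your acquirer's terms.

```python
# Minimal sketch of a PSD2 transaction-risk-analysis (TRA) exemption check.
# The 0.13% / €100 pairing is from the text above; the higher tiers shown here
# follow the commonly cited RTS bands and should be verified for your own setup.
from dataclasses import dataclass

# (max transaction amount in EUR, max provider fraud rate in %)
TRA_BANDS = [
    (100.0, 0.13),
    (250.0, 0.06),
    (500.0, 0.01),
]

@dataclass
class Transaction:
    amount_eur: float
    model_risk_score: float   # 0.0 (safe) .. 1.0 (risky), from the real-time model

def tra_exemption_applies(tx: Transaction, provider_fraud_rate_pct: float,
                          risk_threshold: float = 0.2) -> bool:
    """Return True if this payment may skip strong customer authentication (SCA)."""
    if tx.model_risk_score >= risk_threshold:
        return False                      # the model itself says this looks risky
    for max_amount, max_fraud_rate in TRA_BANDS:
        if tx.amount_eur <= max_amount:
            return provider_fraud_rate_pct <= max_fraud_rate
    return False                          # above the top band, SCA is always required

# Example: a €60 payment scored as low risk, provider fraud rate at 0.09%.
print(tra_exemption_applies(Transaction(60.0, 0.05), provider_fraud_rate_pct=0.09))  # True
print(tra_exemption_applies(Transaction(60.0, 0.05), provider_fraud_rate_pct=0.15))  # False
```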
Regulators want financial institutions to harness AI responsibly. When the U.S. Treasury and other federal agencies urged banks to innovate with AI for anti-money-laundering in mid-2025, they paired that call with strict expectations around accountability and transparency.
The European Banking Authority took a similar stance, insisting that transaction-risk models remain open to audit. Meanwhile, criminals are already using AI to hide illicit flows: smart mixers reroute crypto transactions through labyrinthine paths, and adversarial data tweaks make funds look clean.
This arms race means your models must be tested, retrained, and overseen by people. Compliance officers should always review flagged transactions and own the final decision. It also means you need to know and trust your data.
Combine multiple sources, verify where they come from, and apply digital signatures and provenance tracking, as recommended by the NSA and CISA, so the AI you build rests on solid ground.
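A lightweight way to put that advice into practice is to record a signed provenance manifest for every dataset and verify it before a model ever trains on the data. The sketch below uses an HMAC as a stand-in for a full digital-signature scheme; the key handling, file names and manifest format are assumptions.

```python
# Sketch: record provenance for a training dataset and verify it before use.
# HMAC-SHA256 stands in for a full signature scheme (e.g. asymmetric signing);
# how you store keys and manifests is up to your own infrastructure.
import hashlib
import hmac
import json
from pathlib import Path

SIGNING_KEY = b"replace-with-a-key-from-your-KMS"   # assumption: key comes from a KMS

def sign_dataset(path: Path, source: str) -> dict:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    manifest = {"file": path.name, "sha256": digest, "source": source}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_dataset(path: Path, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        signature, hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest())
    good_hash = hashlib.sha256(path.read_bytes()).hexdigest() == claimed["sha256"]
    return good_sig and good_hash

# Usage: sign when the data is ingested, verify before every training run.
data = Path("transactions_2025q1.csv")               # hypothetical file name
data.write_text("tx_id,amount,label\n1,42.0,0\n")
manifest = sign_dataset(data, source="core-banking-export")
assert verify_dataset(data, manifest), "provenance check failed; do not train on this file"
```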
The Challenge of User Authentication and Verification
AI-powered verification tools have made user onboarding and authentication smoother and safer. Facial recognition, fingerprint scanning and voice ID are now standard for payments.
Behavioural biometrics (monitoring how users type, swipe or hold a device) provide a passive layer of security that is hard to spoof; BioCatch emphasises that these patterns are "almost impossible to replicate". AI also speeds up Know-Your-Customer (KYC) checks by automatically reading documents and cross-matching selfies or live videos.
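For a feel of what "monitoring how users type" can look like, here is a toy sketch that compares the keystroke timing of a live session against an account's enrolled profile. Real behavioural-biometric products use far richer signals and learned models; the features and threshold here are simplified assumptions.

```python
# Toy sketch of behavioural biometrics via keystroke dynamics: compare inter-key
# timing of a live session with the account's stored profile. Real products use
# far richer signals (pressure, swipe paths, device motion) and learned models.
import math
import statistics

def timing_profile(inter_key_ms: list[float]) -> dict:
    """Summarise a typing sample as mean and spread of inter-key intervals."""
    return {"mean": statistics.mean(inter_key_ms),
            "stdev": statistics.pstdev(inter_key_ms)}

def session_score(profile: dict, live_sample_ms: list[float]) -> float:
    """Distance between live typing and the stored profile (lower = more similar)."""
    live = timing_profile(live_sample_ms)
    return math.hypot(live["mean"] - profile["mean"],
                      live["stdev"] - profile["stdev"])

# Enrolment: the account owner's typical typing rhythm (milliseconds between keys).
enrolled = timing_profile([110, 95, 130, 105, 120, 100, 115])

genuine_session = [105, 100, 125, 110, 118]
scripted_bot = [20, 20, 21, 20, 20]          # unnaturally uniform and fast

THRESHOLD = 40.0                              # illustrative cut-off
for name, sample in [("genuine", genuine_session), ("bot-like", scripted_bot)]:
    score = session_score(enrolled, sample)
    verdict = "step-up authentication" if score > THRESHOLD else "pass"
    print(f"{name}: distance={score:.1f} -> {verdict}")
```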

In one case study, automating KYC with AI cut processing times by 66 per cent and achieved 85 per cent accuracy. Yet AI also enables identity fraud. Deepfake generators can create faces and voices that fool facial recognition and voice ID, prompting 91 per cent of financial institutions to reconsider voice-based authentication.
Synthetic identities slip through onboarding because AI can fabricate realistic documents; an estimated 85 to 95 per cent of synthetic identities evade detection. Voice-cloning attacks ("vishing") have already convinced bank managers to authorise transfers.
Automation can't replace human judgement; organisations need multi-layered authentication that combines biometrics, device trust signals (such as IP address, geolocation and device fingerprinting) and one-time passcodes. Advanced liveness checks, which look for micro-expressions or unnatural pixels, help catch deepfakes. AI can help, but it is only one layer.
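One way to combine those layers is a risk-based step-up policy: weigh the biometric, device and behavioural signals, and demand an extra factor whenever confidence drops. The sketch below is a policy illustration with invented signal names and weights, not a production design.

```python
# Sketch of a layered, risk-based authentication decision. Signal names and
# weights are invented for illustration; a real policy would be tuned and audited.
from dataclasses import dataclass

@dataclass
class AuthSignals:
    biometric_match: float      # 0..1 from face/voice/fingerprint matcher
    liveness_ok: bool           # passed micro-expression / texture liveness check
    known_device: bool          # device fingerprint seen before on this account
    geo_consistent: bool        # IP / geolocation consistent with recent history
    behaviour_score: float      # 0..1 similarity of typing/swiping to profile

def decide(s: AuthSignals) -> str:
    if not s.liveness_ok:
        return "deny"                           # possible deepfake or replayed media
    risk = 0.0
    risk += 0.4 * (1.0 - s.biometric_match)
    risk += 0.2 * (0.0 if s.known_device else 1.0)
    risk += 0.2 * (0.0 if s.geo_consistent else 1.0)
    risk += 0.2 * (1.0 - s.behaviour_score)
    if risk < 0.25:
        return "allow"
    if risk < 0.6:
        return "step-up: one-time passcode"     # extra factor instead of a hard deny
    return "deny"

print(decide(AuthSignals(0.97, True, True, True, 0.9)))    # allow
print(decide(AuthSignals(0.92, True, False, False, 0.4)))  # step-up
```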
Behavioural analytics can spot subtle patterns in how users type or move that signal a bot or a deepfake. But education matters just as much. Customers need to know that a bank will never ask for a password or a one-time code by phone or email, and employees must be trained to recognise and resist social engineering.
Securing Data with AI-Driven Threat Detection and Encryption
Protecting data is fundamental to payment security. AI helps by finding and securing sensitive information. Intrusion detection systems powered by AI sift through logs and traffic to spot anomalies quickly. Visa credits its AI tools with cutting breach response times in half.
AI-based data discovery scans databases and file systems to locate card numbers or personal details, tagging them for encryption or tokenisation. Automated key management and classification reduce misconfigurations by as much as 90 per cent.
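As a simplified picture of what such discovery involves, the sketch below scans text for candidate card numbers, confirms them with the Luhn check and tags the hits for tokenisation; real tools layer machine-learning context classification on top of this basic idea.

```python
# Sketch of data discovery: scan text for candidate card numbers (PANs),
# confirm with the Luhn check, and tag findings for tokenisation.
# Real discovery tools add ML-based context classification; this is the bare idea.
import re

PAN_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(digits: str) -> bool:
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(text: str) -> list[str]:
    hits = []
    for match in PAN_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            hits.append(digits)
    return hits

sample = "Order notes: refund to 4111 1111 1111 1111, ticket #123456789012345."
for pan in find_pans(sample):
    print(f"flag for tokenisation: {pan[:6]}…{pan[-4:]}")   # never log the full PAN
```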
Right now, most AI tools cannot reason about the mathematics behind cryptography, so they can do little to improve it directly. Still, researchers are experimenting with ways to use AI to strengthen encryption, such as spotting weak random-number generation or suggesting stronger cipher configurations.
However, AI can introduce risks. Models trained on confidential data might inadvertently leak it if prompts elicit memorised content; the PCI Council warns against feeding AI systems high-impact secrets like API keys or raw card numbers.
Generative code assistants may propose insecure implementations unless developers review their output. Criminals aren't just using AI to create deepfakes or phishing emails; they're also turning it against our core defences. Machine-learning models can automate brute-force attacks, while poisoned training data can distort an AI's logic from the inside.
The remedy is straightforward: be ruthless about what data goes into your models, restrict who has access, and make sure every action is logged. Host your AI systems in secure, isolated environments, have security specialists validate their outputs, and build a "kill switch" into your incident-response plan so you can shut down any compromised model immediately.
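To show how the logging and kill-switch advice might look in code, here is a small sketch that wraps a model so every decision is audit-logged and the model can be disabled the moment something looks wrong; the model interface and log format are illustrative assumptions.

```python
# Sketch of the "log everything and keep a kill switch" advice: wrap model calls
# so every prediction is audit-logged and the model can be disabled instantly.
# The logger configuration and the model interface are illustrative assumptions.
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
audit_log = logging.getLogger("model_audit")

class GuardedModel:
    def __init__(self, model, name: str):
        self.model = model
        self.name = name
        self.enabled = True            # the kill switch

    def kill(self, reason: str) -> None:
        self.enabled = False
        audit_log.warning("KILL SWITCH: %s disabled (%s)", self.name, reason)

    def predict(self, features: dict) -> str:
        request_id = uuid.uuid4().hex[:8]
        if not self.enabled:
            audit_log.error("%s blocked request %s: model disabled", self.name, request_id)
            return "manual_review"     # fail safe, not open
        start = time.perf_counter()
        decision = self.model(features)
        audit_log.info("%s request=%s feature_keys=%s decision=%s latency_ms=%.1f",
                       self.name, request_id, sorted(features), decision,
                       (time.perf_counter() - start) * 1000)
        return decision

# Hypothetical model: flag large transfers for review.
model = GuardedModel(lambda f: "review" if f["amount"] > 500 else "approve",
                     name="fraud-scorer-v3")
print(model.predict({"amount": 120.0, "country": "PL"}))
model.kill("suspected training-data poisoning")
print(model.predict({"amount": 120.0, "country": "PL"}))
```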
Navigating PCI DSS and PSD2 Compliance in the AI Era
On the compliance front, AI doesn't lower the bar; it raises it. The Payment Card Industry Data Security Standard (PCI DSS) still governs how cardholder data is handled, whether by a human-written script or an AI.

When the PCI Council issued AI-specific principles in 2025, they didn't reinvent the rules; they restated them for a new context. AI systems must meet all existing requirements, human owners remain responsible for critical decisions, raw secrets never belong in training data, and every AI action must be logged and auditable.
In short, treat your AI like any other sensitive component in your stack: control it, document it, and scrutinise it. These principles underscore that AI is simply another component to be secured and audited within the cardholder data environment.
The EU's PSD2 regulation introduced strong customer authentication (SCA) but allows transaction risk analysis to exempt low-risk payments. Organisations that keep their fraud rate under 0.13 per cent for transactions below €100 qualify for a transaction risk analysis exemption.
Real-time AI models are critical here, assessing risk instantly to ensure they stay within this threshold. But European regulators, notably the EBA, insist that these models remain transparent and subject to regular audits. Remember that your exemption will be revoked if your fraud rate goes even a fraction over the limit.
So continuously tuning and monitoring your AI is non-negotiable. The message from regulators is clear: AI is welcome, but only with both strict control and human oversight.
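One practical form that continuous monitoring can take is a rolling fraud-rate tracker that warns well before the exemption threshold is crossed, as in the sketch below; the window size and warning margin are arbitrary choices for illustration.

```python
# Sketch of continuous monitoring of the TRA fraud rate: track confirmed fraud
# over a rolling window and alert before the 0.13% reference rate is breached.
# The window size and warning margin are arbitrary choices for illustration.
from collections import deque

class FraudRateMonitor:
    def __init__(self, limit_pct: float = 0.13, warn_at: float = 0.8, window: int = 100_000):
        self.limit_pct = limit_pct
        self.warn_at = warn_at                      # warn at 80% of the limit
        self.outcomes = deque(maxlen=window)        # True = confirmed fraud

    def record(self, is_fraud: bool) -> None:
        self.outcomes.append(is_fraud)

    @property
    def rate_pct(self) -> float:
        if not self.outcomes:
            return 0.0
        return 100.0 * sum(self.outcomes) / len(self.outcomes)

    def status(self) -> str:
        if self.rate_pct > self.limit_pct:
            return "BREACH: stop claiming the TRA exemption and review the model"
        if self.rate_pct > self.warn_at * self.limit_pct:
            return "WARNING: fraud rate approaching the exemption threshold"
        return "OK"

monitor = FraudRateMonitor()
for i in range(50_000):
    monitor.record(is_fraud=(i % 900 == 0))        # ~0.11% simulated fraud rate
print(f"rolling fraud rate = {monitor.rate_pct:.3f}% -> {monitor.status()}")
```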
Building Trust in AI Payments through Governance, Culture, and Human Oversight
No amount of technology can succeed without trust and good governance. Customers must believe their data and money are safe; regulators must trust that institutions are compliant; and employees must trust the AI tools they work with. Transparency helps build that trust.
Customers will only trust AI in payments if they understand how it works for them. That means explaining, in plain language, how you use AI to spot fraud or monitor transactions and offering simple ways for users to appeal decisions. As G2A.COM's Bartosz Skwarczek has noted, even in a digital marketplace there's always a person on the other side of the screen. Treat them with respect and openness.
AI is a tool, not a substitute for people. Human oversight anchors good decision-making, and diverse fraud and compliance teams bring cultural nuance and challenge blind spots, particularly for global businesses like G2A.COM that serve customers in nearly 200 countries.
Continuous training keeps everyone on the front foot against new threats, while regular audits and red-team drills expose weaknesses before criminals do. Adding an AI ethics committee helps you check models for bias and fairness and stay aligned with emerging regulations such as the EU AI Act.
Technology alone won't win the security battle; culture will. A company that prizes communication, collaboration and accountability will get far more value from its AI investments than one that simply deploys the latest tool.

Final Thoughts: Payment Security Demands Smarter, Safer AI Adoption
AI is changing payment security at pace. Used wisely, it blocks billions in fraud and streamlines compliance, authentication and data protection. Used maliciously, it fuels deepfakes, automated money laundering and synthetic identities. The industry's central challenge is harnessing AI's power while neutralising its threats.
The way to do that is already clear: employ adaptive models alongside regular, rigorous testing; share the intelligence you gather with everyone who needs it; and train your people to spot the latest forms of social engineering.
Ground your AI in tried-and-true compliance frameworks like PCI DSS and PSD2, with clear human oversight and audit trails. Strong encryption to secure your data is a given, as are tight access controls and sound development practices.
Most important of all, put trust and transparency at the heart of everything you build. When you balance innovation with responsibility, AI becomes a powerful shield rather than a weapon used against your organisation.
EM360Tech's ongoing research and insights help organisations decode this complex landscape and translate regulatory requirements into practical steps. Organisations like G2A.COM illustrate the results: as a global marketplace for video game keys, software, subscriptions and gift cards, their commitment to secure, trusted transactions builds a platform that serves millions of users worldwide.