What if the very technology designed to protect financial institutions is also being weaponised against them?
That’s the paradox banks are grappling with today.
In early 2024, a Hong Kong company executive joined what seemed like a routine video call with his CFO and senior colleagues. Everything looked and sounded authentic, but it wasn’t. Every participant except the executive was a deepfake clone. By the time the deception was uncovered, US $25 million had been transferred to fraudulent accounts, according to a Deloitte report.
This is no longer a distant threat; it is a reality that businesses are increasingly dealing with. According to Deloitte, by 2027, global fraud losses enabled by generative AI could reach US $40 billion, up from about US $12 billion in 2023. The same AI models that can predict suspicious behaviour or detect anomalies are now being harnessed by criminals to create synthetic identities, forged documents, and hyper-realistic deepfakes.
Fraud is evolving faster than traditional defences can keep up. Rule-based systems designed to catch yesterday’s patterns are being blindsided by AI-driven attacks and increasingly sophisticated social engineering tactics, which demand technology capable of analysing behavioural anomalies rather than matching static rules.
Yet the most powerful response may also lie in AI itself.
According to industry surveys published by Elastic, 91 percent of U.S. banks already use AI for fraud detection, and 83 percent of anti-fraud professionals plan to integrate generative AI by 2025.
In Hong Kong, a recent study by the HKMA and HKIMR found that 75 percent of financial institutions have already implemented or are piloting generative AI, with adoption expected to rise to 87 percent within the next 3 to 5 years. However, adoption levels may vary by industry and company size. Across other APAC markets such as India, Indonesia, and the Philippines, organisations continue to see a rise in real-time payment fraud, social-engineered scams, and identity-related abuse, further accelerating the need for AI-driven detection.
According to insights from the LexisNexis® Digital Identity Network®, which analysed 124 billion transactions across 200 countries, banks leveraging AI-powered models have seen a 260 percent uplift in fraud-detection rates compared to traditional methods. This aligns with Deloitte’s assessment that AI can improve fraud detection by around 20 percent on average, with peaks of up to 300 percent in targeted use cases.
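To make that uplift concrete, the purely illustrative Python sketch below shows why a learned behavioural model can catch what a static rule misses. Every feature, threshold, and data point here is a hypothetical stand-in, not a description of any vendor's production logic.

```python
# Illustrative only: hypothetical features, thresholds, and simulated data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" customer behaviour:
# columns = [amount, hour_of_day, is_new_device, txns_in_last_hour]
normal_history = np.column_stack([
    rng.lognormal(3.5, 0.5, 5000),   # typical transfer amounts
    rng.integers(8, 22, 5000),       # mostly daytime activity
    rng.binomial(1, 0.05, 5000),     # rarely from a new device
    rng.poisson(1.0, 5000),          # low transaction velocity
])

def legacy_rule(txn):
    """Yesterday's pattern: only flag large transfers."""
    return txn[0] > 500

# The anomaly model learns what normal behaviour looks like across all
# features at once, so it can flag a modest transfer made at 3 a.m. from a
# new device at unusually high velocity, exactly the case the rule ignores.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_history)

suspicious = np.array([[120.0, 3, 1, 9]])  # small amount, abnormal behaviour
print("rule flags it: ", bool(legacy_rule(suspicious[0])))
print("model flags it:", bool(model.predict(suspicious)[0] == -1))
```

Production systems combine far more signals and continuous feedback, but the principle is the same: behaviour as a whole, not a single threshold, drives the decision.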
These insights were reinforced during our recent APAC webinar, “The Dual Nature of AI: From Detection to Deception,” where industry leaders Thanh Tai Vo (Director Market Planning, Fraud and Identity APAC, LexisNexis® Risk Solutions), Paul Warren-Tape (Chief of Risk and Product Strategy, IDVerse), and Brad Scoble (Credit and Fraud Manager, TPG Telecom) discussed how AI is reshaping fraud and compliance. Their perspectives spanned financial services, telecom, and digital commerce, three sectors experiencing significant growth in AI-enabled fraud patterns across APAC.
Panellists emphasised that AI has become both the magnifier of risk and the multiplier of resilience. The same techniques fraudsters exploit can be used to uncover patterns invisible to human analysts, especially across large, complex datasets.
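One example of a pattern that rarely surfaces in manual review is identity linkage: seemingly unrelated customers who quietly share the same device, email, or phone number. The snippet below is a toy-scale illustration of the idea using pandas; the column names and cluster-size threshold are hypothetical.

```python
# Toy-scale illustration of identity linkage analysis; column names and the
# cluster-size threshold are hypothetical, not a real data model.
import pandas as pd

applications = pd.DataFrame({
    "customer_id":        ["c1", "c2", "c3", "c4", "c5", "c6"],
    "device_fingerprint": ["d1", "d1", "d1", "d2", "d3", "d1"],
    "declared_income":    [52000, 87000, 61000, 45000, 73000, 99000],
})

# Count how many distinct "customers" sit behind each device.
applications["customers_on_device"] = (
    applications.groupby("device_fingerprint")["customer_id"].transform("nunique")
)

# Flag applications whose device is shared by an unusually large identity
# cluster, a common footprint of synthetic-identity rings.
flagged = applications[applications["customers_on_device"] >= 3]
print(flagged[["customer_id", "device_fingerprint", "customers_on_device"]])
```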
A recent LexisNexis® Risk Solutions customer success story illustrates how adaptive AI can deliver measurable impact in real-world fraud environments. A large global ecommerce organisation operating across multiple regions faced rising payment fraud and increasing operational strain from manually calibrated rules.
By shifting to AI-driven fraud models that learn continuously, the organisation was able to detect twice as much fraud in its highest-risk segments, while enabling 83 percent of low-risk transactions to be approved automatically. This significantly reduced manual review volumes and customer friction, while allowing fraud teams to focus on higher-risk activity. The AI models continuously improved using transaction and fraud feedback, enabling more confident, data-informed decisions at scale.
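The pattern described above, a model that keeps learning from confirmed fraud outcomes while auto-approving clearly low-risk traffic, can be sketched in a few lines. The example below is a simplified, hypothetical illustration with simulated data and illustrative features and thresholds, not a description of the actual solution.

```python
# Simplified sketch of a continuously learning fraud score with a feedback loop.
# Features, thresholds, and the model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss", random_state=0)

def daily_batch(n=1000, fraud_rate=0.02):
    """Simulated transactions: [amount, new_payee, foreign_ip] plus a fraud label."""
    y = rng.binomial(1, fraud_rate, n)
    X = np.column_stack([
        rng.lognormal(3.0 + 1.5 * y, 0.6),  # fraud skews towards larger amounts
        rng.binomial(1, 0.10 + 0.60 * y),   # fraud skews towards new payees
        rng.binomial(1, 0.05 + 0.50 * y),   # fraud skews towards foreign IPs
    ])
    return X, y

# Each day's confirmed outcomes are fed back in, so the score keeps adapting
# as attack patterns shift instead of waiting for a manual rule recalibration.
for day in range(5):
    X, y = daily_batch()
    model.partial_fit(X, y, classes=[0, 1])

# Auto-approve low-risk traffic and route the remainder to manual review.
X_today, _ = daily_batch(10)
for score in model.predict_proba(X_today)[:, 1]:
    print("auto-approve" if score < 0.05 else "manual review", round(float(score), 3))
```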
Deploying AI defensively comes with its own set of challenges, but it also offers opportunities for businesses to respond to the ever-evolving fraud landscape more quickly.
According to the Monetary Authority of Singapore (MAS), stronger model-risk controls and transparent validation practices are critical to responsible AI adoption, as outlined in its 2024 AI Model Risk Management paper.
In Hong Kong, guidance from the HKMA and the Securities and Futures Commission (SFC) has placed increased emphasis on explainability and responsible AI use. In Australia, regulators are rolling out anti-scam codes that rely on stronger AI-driven identity and transaction verification standards, as highlighted in recent regulatory updates from Kroll.
During the webinar, speakers agreed that technology is only as trustworthy as its explainability. AI-driven decisions must be understandable, appropriate for the problem being addressed, and demonstrably improving outcomes for businesses and consumers. These principles align with the Responsible AI framework established by RELX.
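Explainability can be made tangible. One simple approach, sketched below with a hypothetical linear model and made-up feature names, is to return reason codes alongside every score by reporting how much each input contributed to the decision.

```python
# Hedged sketch of per-decision reason codes for a linear fraud model.
# Feature names, training data, and weights are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["amount_zscore", "new_payee", "foreign_ip"]

# Tiny illustrative training set (label 1 = confirmed fraud).
X = np.array([[0.1, 0, 0], [0.3, 0, 0], [2.5, 1, 1],
              [3.0, 1, 0], [0.2, 0, 1], [2.8, 1, 1]])
y = np.array([0, 0, 1, 1, 0, 1])
model = LogisticRegression().fit(X, y)

def score_with_reasons(txn):
    """Return the fraud probability and each feature's contribution to the logit."""
    contributions = model.coef_[0] * txn
    probability = model.predict_proba([txn])[0, 1]
    reasons = sorted(zip(feature_names, contributions), key=lambda r: -abs(r[1]))
    return probability, reasons

probability, reasons = score_with_reasons(np.array([2.7, 1, 0]))
print(f"fraud score: {probability:.2f}")
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")
```

For more complex models, model-agnostic techniques such as SHAP play a similar role, but the goal is identical: every automated decision should come with reasons a human can review.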
A consistent message throughout the discussion was that responsible AI is not about limiting innovation; it is about enabling trustworthy transformation.
Key recommendations included:
As one panellist noted, “To not embrace AI would be a strategic disadvantage.” The imperative now is to embrace it responsibly.
As AI’s capabilities accelerate, the upside for financial institutions is significant: faster onboarding, sharper detection, reduced losses, and a better customer experience. But the cost of inaction is rising even faster: sophisticated scams, operational vulnerabilities, customer distrust, and regulatory penalties.
The paradox of AI in financial crime is not going away. The institutions that turn AI from a threat into an ally will be the ones that earn lasting trust and long-term resilience.
For deeper insights, watch our on-demand webinar, where our panel of experts breaks down the dual nature of AI, from detection to deception.
Sources & References