The central role money mules play in laundering the proceeds of global financial crime has long been acknowledged by the banking industry. Yet treatment strategies have been far from consistent, and until recently many institutions did not prioritize real-time mule prevention. However, with the new PSR reimbursement model in the UK shifting liability to both the sending and receiving banks, that mindset is changing. Combined with the evolving threats posed by money mules in the banking system, it’s widely accepted that a fresh approach to dealing with them is required.
This includes recognizing that the broad label “money mule” is less than helpful in building effective models to reliably detect them. Just as scams are sub-classified into romance, investment, purchase and others, money mules require equally nuanced classification that better reflects their distinct subtypes, behaviors and culpability.
If your organization’s fraud team is not currently developing distinct machine learning scores and detection strategies to cater for each type of money mule, here are some compelling reasons why it should be.
Money mules facilitate fraud, scams and money laundering, often acting as the bridge between criminal enterprises and the financial system. ‘Stop the mules, stop the fraud’, so the saying goes, illustrating the truly integral part they play in the success of fraud networks.
The sector today broadly recognizes three distinct types of money mule behavior – complicit, recruited and exploited – each examined below.
As we examine the distinct behaviors within each category of mule account, it becomes clear why a one-size-fits-all approach falls short of an effective detection strategy. With renewed pressure from regulators for a clearer understanding of incoming and outgoing payment risk across banks’ networks, this analysis is more critical than ever.
At one end of the spectrum are complicit mules: individuals who intentionally open bank accounts to facilitate fraudulent transactions. They typically have direct contact with criminal networks, acting as co-conspirators in opening accounts, moving funds and making decisions. Operationally, this makes complicit mules slightly easier to detect than the other categories, since they tend to follow recognizable patterns of short-term account opening and atypical transactional activity. Detecting them can largely be managed with rule-based monitoring and machine learning models focused on spotting these tell-tale patterns.
As illustrated below, newly-opened complicit mule accounts tend to display a progressive increase in activity, leading to a ‘high-volume’ day when the proceeds of crime are deposited. The owner will typically check the account several times in anticipation of the funds arriving, in order to confirm receipt with the fraudster. These patterns are unusual for a normal new bank account.
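To make the rule-based side of this concrete, here is a minimal sketch of such a heuristic. The thresholds, the `DailyActivity` structure and the field names are illustrative assumptions for this article, not production logic:

```python
from dataclasses import dataclass

@dataclass
class DailyActivity:
    logins: int           # times the owner checked the account that day
    inbound_value: float  # total value of funds received that day

def looks_like_complicit_mule(account_age_days: int,
                              history: list[DailyActivity],
                              max_age: int = 30,
                              ramp_days: int = 3,
                              spike_ratio: float = 5.0,
                              login_threshold: int = 5) -> bool:
    """Heuristic: a recently opened account whose inbound value ramps up
    to a 'high-volume' day, accompanied by unusually frequent checks."""
    if account_age_days > max_age or len(history) < ramp_days + 1:
        return False
    *ramp, spike_day = history[-(ramp_days + 1):]
    ramp_values = [d.inbound_value for d in ramp]
    # progressive increase in activity over the ramp window
    increasing = all(a <= b for a, b in zip(ramp_values, ramp_values[1:]))
    baseline = max(ramp_values) or 1.0
    # a deposit spike, plus repeated balance checks in anticipation of funds
    spiked = spike_day.inbound_value >= spike_ratio * baseline
    anxious = spike_day.logins >= login_threshold
    return increasing and spiked and anxious
```

In practice these hand-set thresholds would be replaced or supplemented by a trained model, but the sketch captures the ramp-then-spike shape described above.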
In the middle of the spectrum are intentionally-recruited mules: existing bank account holders who are approached and persuaded or coerced into becoming money mules. Social media recruitment and paid-for advertising campaigns target students and teenagers with the promise of fast money earned from home. While they may not fully understand the scope or implications of their actions, they willingly engage in moving funds in exchange for financial or other incentives.
Recruited mules display significantly different, often subtler behaviors, posing a greater challenge for fraud detection and requiring distinct detection models. Credit history, changes in transaction volumes and shifting transaction patterns can all help identify this subset. With 50/50 fraud reimbursement rules now in place in the UK, both sending and receiving banks have a vested interest in identifying recruited mules early. Sophisticated scoring models can help receiving banks detect behavioral patterns that might indicate a recruited mule account. Meanwhile, sending banks can monitor for the gradual increase in account activity indicative of a recruited mule account preparing to receive funds.
The following graph demonstrates the typical progressive changes observed in recruited mule accounts. Initially the account lies dormant; this is followed by a period of low-level transaction activity consistent with tester payments being made. Eventually a major spike of high-risk activity occurs, indicative of fraudulent funds being moved.
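The dormant-then-tester-then-spike progression lends itself to simple feature engineering. The sketch below derives three such features from a daily outbound-value series; the feature names and the tester-payment threshold are hypothetical choices for illustration:

```python
def recruited_mule_features(daily_outbound: list[float],
                            tester_max: float = 25.0) -> dict:
    """Derive illustrative features from a daily outbound-value series:
    a dormant stretch, low-value 'tester' payments, then a spike."""
    # length of the initial dormant period (days with no outbound activity)
    dormant = 0
    for v in daily_outbound:
        if v == 0:
            dormant += 1
        else:
            break
    active = daily_outbound[dormant:]
    # count of small payments consistent with testers
    testers = sum(1 for v in active if 0 < v <= tester_max)
    # how far the peak day sits above the median active day
    peak = max(active, default=0.0)
    nonzero = sorted(v for v in active if v > 0)
    median = nonzero[len(nonzero) // 2] if nonzero else 0.0
    return {
        "dormant_days": dormant,
        "tester_payments": testers,
        "spike_ratio": peak / median if median else 0.0,
    }
```

Features like these would then feed the recruited-mule-specific scoring model rather than act as rules in their own right.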
At the most complex and challenging end of the spectrum are the true victims: exploited mules. These are typical current account customers who, through account takeover, manipulation or coercion, unknowingly launder money for fraudsters. Often, these victims believe they are helping a friend, doing legitimate paid work, or responding to an urgent request from a romantic acquaintance.
Exploited mules pose the greatest challenge for fraud operations, since their accounts don’t typically display the intentional or overtly suspicious ‘build-up’ behaviors that characterize complicit or recruited mules. Spotting exploited mules is more like finding a needle in a haystack, since they ostensibly behave like any other of the bank’s million or so active current accounts. Detection is therefore far trickier without specially trained models. The PSR’s new shared liability rules are designed to encourage this greater level of detection and collaboration between sending and receiving banks: receiving banks must look for signs of coercion in incoming transaction patterns, while sending banks must monitor for uncharacteristic transaction velocity and values from otherwise trusted customers.
The following graph shows a legitimate bank account whose owner is unaware it is being used as a mule account. A fraudster is either crediting the account and convincing the victim to transfer the funds onwards, or has maliciously taken over the account to conduct the fraudulent transactions themselves. The activity below shows long-term normal behavior, followed by a spike in high-risk activity as the fraudster begins to exploit the victim, then a return to normal, trusted behavior once the fraudulent funds have been moved. Risky indicators might include an active phone call on the device while account activity is occurring, the use of a new device with the account, a high velocity of transaction events, or high-value or multiple payments made directly after fraudulent funds are deposited.
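Session-level signals like these can be combined into a simple additive risk score. The signal names and weights below are purely illustrative; in a real system the weights would be learned by a model rather than hand-set:

```python
def exploited_mule_risk(signals: dict) -> float:
    """Combine session-level risk indicators into a score in [0, 1].
    Weights are illustrative assumptions, not calibrated values."""
    weights = {
        "call_in_progress": 0.30,      # active phone call during the session
        "new_device": 0.20,            # account accessed from an unseen device
        "high_velocity": 0.25,         # burst of transaction events
        "payout_after_deposit": 0.25,  # funds moved out right after arriving
    }
    # sum the weight of every indicator that fired in this session
    return sum(w for name, w in weights.items() if signals.get(name))
```

A score near 1.0 would place the session in a review or step-up-authentication queue; a single weak indicator alone would not.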
The nuanced behaviors displayed by these three mule classifications call for specific machine learning models that can accurately and reliably differentiate between them in a live environment. Our painstaking research finds that intricate models focused on specific sets of behaviors are better placed than less focused ‘catch-all’ models to detect specific account anomalies, such as changes in credit information, unnatural activity and outbound transaction patterns.
In addition to being targeted at specific behaviors, research has found that models must be trained and implemented independently of one another. Internal analysis of model performance on customer data from a tier two UK bank showed a consistently higher success rate at detecting recruited and complicit mules when the models were trained separately. Implemented more widely, this approach could deliver significant accuracy gains across the banking sector.
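The idea of independently trained per-classification models can be illustrated with a toy example. The nearest-centroid ‘models’ below stand in for real machine learning models, and the feature vectors and labels are invented purely for illustration:

```python
from statistics import mean

def train_centroid(examples: list[list[float]]) -> list[float]:
    """Toy per-class 'model': the centroid of that class's feature vectors."""
    return [mean(col) for col in zip(*examples)]

def score(centroid: list[float], x: list[float]) -> float:
    """Similarity score: negative squared distance to the class centroid."""
    return -sum((a - b) ** 2 for a, b in zip(centroid, x))

def train_per_class(data: dict) -> dict:
    """Train one model per mule classification, each only on its own
    labeled examples, rather than one catch-all model on pooled data."""
    return {label: train_centroid(examples) for label, examples in data.items()}

def classify(models: dict, x: list[float]) -> str:
    """Score an account against every per-class model; take the best fit."""
    return max(models, key=lambda label: score(models[label], x))
```

Because each model sees only its own classification's examples, a behavior that is weak evidence for one mule type cannot be diluted by the very different behaviors of the other two, which is the intuition behind the separate-training result above.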
Smaller banks and institutions need not be held back by the limitations of their own data and fraud intelligence: global insights can be gained through participation in collaborative networked intelligence consortia, which offer analysis, transaction histories, intelligence such as known fraud outcomes, and blended scores for each of the three mule behaviors.
The benefits of focusing models on specific mule classifications, as opposed to a ‘catch-all’ model, have been seen in multiple tests run by LexisNexis® Risk Solutions using genuine customer transaction data from a tier two UK bank, illustrated below.
A specific complicit mule model was trained on a small sample of known complicit mule accounts and detected 51% of the mule transactions, representing 75% of the value of the potential fraud loss (Fig 1).
The bank then ran a second model, trained specifically to detect recruited mules, against the same small sample of known complicit mule accounts to compare the results. It found notably lower performance, with just 25% of the potential fraud loss value detected. This simple test demonstrates the importance of detection models being specially trained on the distinct behaviors of each mule sub-classification (Fig 2).
The banking industry’s approach to detecting mules must continuously adapt as fraud evolves. Armies of recruited mules grow daily, courtesy of highly persuasive advertising campaigns across social media, making detection increasingly difficult for banks. As mule herders endeavor to stay one step ahead, banks in turn must keep pace by fine-tuning their detection processes to look for the subtle actions that can reveal criminal behavior. And with the introduction of shared liability amongst UK banks, tailored detection strategies are more important than ever to avoid the losses, hefty fines and reputational damage that come with failing to effectively manage mule accounts in a network.
The evolving threat posed by money mules, coupled with the UK’s new 50/50 liability rules, highlights the need for a comprehensive, adaptive approach. In the case illustrated above, a single holistic mule model was far less effective at capturing the full complexity of all three mule classifications due to the marked variations in behavior. By developing mule-specific classifications and machine learning models, financial institutions can better protect themselves, their customers and the wider financial ecosystem.
Money mules are the engine room of global networked fraud – stop them and the whole operation grinds to a halt. Yet most mule mitigation strategies are still in their infancy, even in highly developed economies. Even organizations that consider themselves to have mule detection strategies in place may be greatly overestimating the effectiveness of their models if they fail to differentiate between the major mule classifications outlined here.
The best way to tackle this growing and evolving issue is to get organized. First, the industry must recognize that mules are not all born equal: an agreed standard set of classifications – Complicit, Recruited and Exploited – must be in place for accurate cross-industry reporting. Second, appropriate treatment strategies must be applied consistently by banks to each classification, recognizing that there is no one-model-fits-all solution. Third, the industry must be prepared to collaborate more readily, sharing the fraud intelligence it gathers to help others detect and prevent mule accounts from operating freely and without consequence. Greater sector-wide visibility of suspicious account activity and associated devices, emails and other digital signals, through a consortium approach to fraud detection, combined with a coordinated approach to prevention and treatment of offenders, should eventually leave nowhere for the mules to hide.
Jonathan Lamb, Senior Engagement Manager, LexisNexis® Risk Solutions