The rapid emergence of artificial intelligence models such as ChatGPT, which are based on natural language processing, is shaking up the landscape of online scams. What promise do these new tools hold? And what are the risks to users if they wind up in the hands of a scammer? We take a look at the possibilities offered by artificial intelligence in the field of online scams.
Although ChatGPT is currently programmed to avoid any direct involvement in malicious online scams, artificial intelligence tools could nonetheless revolutionise scammers’ practices.
As Johanne Ulloa, Director of Solutions Consulting at LexisNexis Risk Solutions, points out: ‘Chatbots can still be used to improve the effectiveness of phishing e-mails, as there is nothing in place to stop texts being generated that ask a customer to log in to an online account for security reasons.’
Thanks to text-to-speech tools, artificial intelligence (AI) can also be used to bypass voice authentication systems. ‘These AI systems can “read” a text and reproduce a sampled voice. Some people have received messages from “relatives” whose voices had been spoofed using the same principle,’ emphasises Johanne Ulloa. This modus operandi is similar to deepfakes.
The widespread presence of chatbot applications in mobile app stores could also pose a threat by making it easier for malicious applications to spread.
The widespread use of strong authentication systems is prompting scammers to change their practices and redouble their efforts to pull off social engineering scams.
Social engineering is a broad term covering the ‘scams used by criminals to exploit a person’s trust in order to obtain money directly or obtain confidential information to enable a subsequent crime’ (source: Interpol). In these scams, it is the victims themselves who carry out the strong authentication, under the influence of a scammer on the telephone.
And this is not something that applies to ordinary users alone. ‘The methods used by some scammers are highly sophisticated, and can even be used to deceive high-level professionals such as financial directors,’ says Johanne Ulloa.
With this in mind, the use of artificial intelligence by scammers poses a real danger for companies and users alike. Scams are currently carried out one to one (one scammer, one victim), which limits the scammer’s attention to a single target at a time; chatbots and text-to-speech tools could change the game by enabling these techniques to be deployed at scale. ‘Scam call centre platforms are likely to be replaced by conversational agents, which could enable scammers to scale things up,’ says Johanne Ulloa.
Here is a possible scenario: a conversational agent, backed by voice-cloning tools, contacts thousands of customers at once, poses as their bank’s anti-fraud department and talks each victim through the strong authentication steps needed to validate a fraudulent payment.
For Johanne Ulloa, however, ‘AI use in scams is still marginal, and the biggest risk with this type of tool at the moment is linked to data confidentiality.’ Indeed, using a conversational agent can feel innocuous, and users who are unaware of confidentiality issues may be inclined to share sensitive information with it.
How this information is processed by language models is still shrouded in mystery. What we do know is that it is likely to be reused by the chatbot, as recently demonstrated by the case involving the developers of a major mobile phone company, or by the conversational agent vendor that had to temporarily disable its model because of a similar problem.
Between the internal limitations of language models, users’ ability to recognise sensitive data, the confidentiality of that data and the level of awareness among individuals and companies, there is still much to be done in terms of prevention.
But if this picture seems bleak, then rest assured that artificial intelligence tools also have their uses in the fight against online scams.
Though not in the form of conversational agents, AI is already widely used to help detect and prevent scams through machine learning models. The flexibility of these algorithms allows them to adapt effectively as online scamming methods evolve. They can analyse massive volumes of data in real time and help identify patterns that could indicate fraudulent activity, and ‘chatbots will be able to help improve the investigations carried out by analysts,’ states Johanne Ulloa.
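To make the idea of pattern-based detection concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn’s IsolationForest. The transaction features, figures and thresholds are invented for the example; they are not drawn from the article and do not represent how LexisNexis® ThreatMetrix® or any other specific product works.

```python
# Illustrative sketch only: a toy anomaly-detection model for flagging
# suspicious transactions. Feature names and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated historical transactions: amount (EUR), hour of day, and
# number of logins from new devices in the past week.
normal = np.column_stack([
    rng.normal(80, 30, 1000),   # typical purchase amounts
    rng.normal(14, 4, 1000),    # mostly daytime activity
    rng.poisson(0.2, 1000),     # new-device logins are rare
])

# Learn what "normal" behaviour looks like; assume ~1% of traffic is anomalous.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new activity: a large late-night transfer from a freshly enrolled
# device stands out from the learned patterns.
new_activity = np.array([
    [75.0, 15.0, 0.0],      # looks like routine behaviour
    [4900.0, 3.0, 4.0],     # unusual amount, hour and device history
])
labels = model.predict(new_activity)             # 1 = normal, -1 = flagged
scores = model.decision_function(new_activity)   # lower = more anomalous

for row, label, score in zip(new_activity, labels, scores):
    status = "flag for analyst review" if label == -1 else "ok"
    print(f"amount={row[0]:>7.2f} hour={row[1]:>4.1f} "
          f"new_devices={row[2]:.0f} score={score:+.3f} -> {status}")
```

In practice, such unsupervised models would typically be combined with supervised models trained on labelled fraud cases and with behavioural and device signals, with flagged events routed to human analysts rather than blocked automatically.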
Making strong authentication more systematic in order to detect scams can mean longer processes for legitimate customers. LexisNexis® ThreatMetrix® is a reliable and effective supplement to traditional fraud detection solutions, one that keeps the user experience smooth while remaining highly secure.