AI and online scams: what are the possibilities?
The rapid emergence of artificial intelligence models such as ChatGPT, which are based on natural language processing, is shaking up the landscape of online scams. What promise do these new tools hold? And what are the risks to users if they wind up in the hands of a scammer? We take a look at the possibilities offered by artificial intelligence in the field of online scams.

Are chatbots the new weapon of online scammers?

Although ChatGPT is currently programmed to avoid any direct involvement in malicious online scams, artificial intelligence tools could nonetheless revolutionise scammers’ practices.

As Johanne Ulloa, Director of Solutions Consulting at LexisNexis Risk Solutions, points out: ‘Chatbots can still be used to improve the effectiveness of phishing e-mails, as there is nothing in place to stop them generating texts that ask a customer to log in to an online account for security reasons.’

Thanks to text-to-speech tools, artificial intelligence (AI) can also be used to bypass voice authentication systems. ‘These AI systems can “read” a text aloud and reproduce a sampled voice. Some people have received messages from “relatives” whose voices had been spoofed using the same principle,’ emphasises Johanne Ulloa. This modus operandi is similar to that of deepfakes.

The proliferation of chatbot applications in mobile app stores could also pose a threat, as it makes it easier for malicious applications to spread among them.

Social engineering scams: the risk of scaling up

The widespread use of strong authentication systems is prompting scammers to change their practices and redouble their efforts to pull off social engineering scams.

This broad term refers to the ‘scams used by criminals to exploit a person’s trust in order to obtain money directly or obtain confidential information to enable a subsequent crime’ (source: Interpol). In practice, it means that the victims carry out the strong authentication themselves, under the influence of a scammer over the telephone.

And this is not something that applies to ordinary users alone. ‘The methods used by some scammers are highly sophisticated, and can even be used to deceive high-level professionals such as financial directors,’ says Johanne Ulloa.

With this in mind, the use of artificial intelligence by scammers poses a real danger for companies and users alike. While scams are currently carried out on a one-to-one basis (one scammer, one victim), tying up the scammer’s attention on a single target at a time, the use of chatbots and text-to-speech tools could well change the game by enabling these techniques to be deployed at scale. ‘Scam call centre platforms are likely to be replaced by conversational agents, which could enable scammers to scale things up,’ says Johanne Ulloa.

Here is a possible scenario:

  • The scammer sends out a phishing e-mail.
  • The victim fills in the form with their personal details, login and password, telephone number and the name of their bank advisor.
  • A bot calls the bank and records the advisor’s voice (only a few seconds of audio are needed to reproduce it).
  • All the chatbot has to do then is call the victim using the bank advisor’s voice to convince them to authenticate something or make a bank transfer.

The conundrum of data confidentiality

But for Johanne Ulloa, ‘AI use in scams is still marginal, and the biggest risk with this type of tool at the moment is linked to data confidentiality.’ Conversing with these agents can seem innocuous, and users who are unaware of the confidentiality issues may be inclined to share sensitive information with them.

How this information is processed by language models is still shrouded in mystery. What we do know is that it can be reused by the chatbot, as recently demonstrated by the case involving the developers of a major mobile phone company, or that of a conversational agent vendor that had to temporarily disable its model due to a similar problem.

Between the internal limitations of language models, users’ ability to identify sensitive data, the confidentiality of that data, and the need to educate individuals and companies alike, much remains to be done in terms of prevention.

The use of AI to combat scams

If this picture seems bleak, rest assured that artificial intelligence tools also have their uses in the fight against online scams.

Though not in the form of conversational agents, AI is already widely used to help detect and prevent scams through machine learning models. The great flexibility of these algorithms also enables them to adapt effectively as online scamming methods evolve. By analysing massive volumes of data in real time and helping to identify patterns that could indicate fraudulent activity, ‘chatbots will be able to help improve the investigations carried out by analysts,’ states Johanne Ulloa.
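
As a purely illustrative sketch (and not a description of LexisNexis’s own technology), the short Python example below shows the kind of machine learning approach this alludes to: an anomaly detector trained on a customer’s legitimate transaction history, which flags unusual transactions for analyst review. The features, values and thresholds are all hypothetical.

    # Illustrative only: a hypothetical anomaly detector for transaction
    # scoring. Feature names and values are invented for this example.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Hypothetical per-transaction features:
    # [amount, hour of day, seconds since login, new device? (0/1)]
    legitimate_history = np.column_stack([
        rng.normal(80, 30, 1000),    # typical small payments
        rng.normal(14, 4, 1000),     # mostly daytime activity
        rng.normal(600, 200, 1000),  # unhurried sessions
        np.zeros(1000),              # known devices
    ])

    # Train an unsupervised model on legitimate behaviour only.
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(legitimate_history)

    # Score an incoming transaction: a large night-time transfer made
    # seconds after login, from a new device.
    incoming = np.array([[2500.0, 3.0, 20.0, 1.0]])
    print("anomaly score:", detector.decision_function(incoming)[0])
    if detector.predict(incoming)[0] == -1:  # -1 means outlier
        print("flag for analyst review")

In a real deployment, such a score would typically feed an analyst’s investigation queue rather than block the transaction outright, which is where the improvement to investigations comes in.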

Having the ability to detect scams is not enough

Systematically requiring strong authentication in order to detect scams can mean longer processes for legitimate customers. LexisNexis® ThreatMetrix® is a reliable and effective complement to traditional fraud detection solutions, and one that keeps the user experience smooth without compromising on security.
