Europol details ChatGPT’s potential for criminal abuse

With the surge in public interest in ChatGPT, the Europol Innovation Lab took the matter seriously and conducted a series of workshops involving subject matter experts from various departments of Europol. These workshops aimed to investigate how criminals can exploit large language models (LLMs) such as ChatGPT, and how these models can assist investigators in their day-to-day work.

Their insights are compiled in Europol’s first Tech Watch Flash report, ChatGPT – the impact of Large Language Models on Law Enforcement, which provides an overview of the potential misuse of ChatGPT and offers an outlook on what may still be to come.

“When criminals use ChatGPT, there are no language or culture barriers. They can prompt the application to gather information about organisations, the events they take part in, the companies they work with, at phenomenal speed. They can then prompt ChatGPT to use this information to write highly credible scam emails. When the target receives an email from their ‘apparent’ bank, CEO or supplier, there are no language tell-tale signs the email is bogus. The tone, context and reason to carry out the bank transfer give no evidence to suggest the email is a scam. This makes ChatGPT-generated phishing emails very difficult to spot and dangerous,” said Julia O’Toole, CEO, MyCena Security Solutions.

ChatGPT and potential criminal abuse

As the capabilities of LLMs such as ChatGPT are actively being improved, the potential exploitation of these types of AI systems by criminals provides a grim outlook.

The following three crime areas are among the many areas of concern identified by Europol’s experts:

Fraud and social engineering: ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes. The ability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors.

Disinformation: ChatGPT excels at producing authentic-sounding text at speed and scale. This makes the model ideal for propaganda and disinformation purposes, as it allows users to generate and spread messages reflecting a specific narrative with relatively little effort.

Cybercrime: In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource to produce malicious code.

As technology progresses and new models become available, it will become increasingly important for law enforcement to stay at the forefront of these developments to anticipate and prevent abuse.
