

Malicious code and phishing: ChatGPT abuse worries Europol

While the discussion about ethical and legal rules for the use of AI has only just begun, cybercriminals are already shamelessly exploiting the new possibilities of ChatGPT and similar tools. The European police authority Europol is concerned.

The rapid pace of development and adoption of AI systems is causing concern at Europol. A study by the European authority highlights the new challenges.

AI systems such as ChatGPT are currently a hot topic. Soon after the system became available, hacker groups came up with the idea of having sophisticated malware created by the AI. Prompts such as "Write me software that exploits security gaps in firewall X, gains persistence on a Microsoft system, and is accessible via remote access" actually produced the desired results. This is because ChatGPT can also generate code in different programming languages. For attackers with little technical knowledge, this is a valuable aid in generating malicious code.

But the system can do more. Anyone can achieve results quickly and easily with ChatGPT. The AI system delivers comprehensible results even for complex questions. There seem to be hardly any limits to the areas of application. It can be used to work out legal questions as well as medical diagnoses or art.


Criminologists are concerned

Behind this is a so-called large language model (LLM). This is an extensive neural network that has been trained by self-supervised learning on large amounts of text from the internet. The result is, among other things, a statistical model that describes how likely a certain sentence is in a certain context. It can understand human input and respond to it in a natural-sounding way.
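The core idea of such a statistical model can be illustrated with a deliberately tiny sketch. The toy corpus and the bigram (word-pair) counting below are illustrative assumptions, not how ChatGPT is actually built; real LLMs use neural networks over vastly larger data, but the principle of estimating "how likely is this word, given its context" is the same:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for "large amounts of text from the internet".
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows another one (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probability(prev, nxt):
    """P(next word | previous word), estimated from the counts."""
    total = sum(follows[prev].values())
    return follows[prev][nxt] / total if total else 0.0

def sentence_probability(words):
    """Multiply the conditional probabilities along the sentence."""
    p = 1.0
    for prev, nxt in zip(words, words[1:]):
        p *= next_word_probability(prev, nxt)
    return p

print(sentence_probability("the cat sat on the mat".split()))  # → 0.0625
print(sentence_probability("the mat sat on the cat".split()))  # → 0.0
```

The model assigns a higher probability to the word order it has seen in training than to an implausible one, which is, in a vastly scaled-up form, what lets an LLM produce natural-sounding text.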

The extensive capabilities bring with them a wide range of possible uses, which is something that criminologists are quite concerned about. For this reason, the European police authority Europol has now examined the consequences and prepared a study for law enforcement agencies. An abridged version is publicly available. The study is the result of a series of workshops with experts from across Europol to examine how criminals can misuse large language models (LLMs) like ChatGPT and how the technology can help investigators in their daily work.


Highly sophisticated forms of fraud in the future

ChatGPT has the ability to write very realistic and authentic texts. Europol therefore sees great potential for abuse in phishing attacks. Phishing emails created with it can be more easily tailored to target groups or individuals and are thus more promising for cybercriminals. With the ability to reproduce speech patterns, it is easy to imitate the writing styles of certain people or groups. Once the AI software is fed a small sample of a certain style, it can readily imitate it and use it in further messages. For example, convincingly faked emails can be created that appear to come from a real employee. This capability can be widely abused because it makes potential victims much more likely to trust criminal actors, Europol warns. By generating matching contexts, forms of cybercrime such as CEO fraud will become much easier and very likely more sophisticated in the future.

Europol draws attention to another possibility: for criminals without expertise in specific crime areas, ChatGPT offers an ideal research tool. It can quickly provide condensed key information that can then be used in subsequent steps. Thus, without prior knowledge, information can be obtained on a wide range of potential crime areas, "from burglary to sexual abuse" and, of course, cybercrime, writes Europol. The wide range of uses currently only hints at "how diverse and potentially dangerous LLMs like ChatGPT can be in the hands of malicious actors", Europol warns. The experts anticipate numerous new crime variants in the future, which will present cybercrime specialists with entirely new challenges.


Artificial intelligence also promises help for cybersecurity experts


While the attackers are once again one step ahead in the ongoing cat-and-mouse game, the hope is that ChatGPT and similar tools will also provide support for the defenders.