
Stefan Strobel, cirosec GmbH
  • Industry News
  • Management, Awareness and Compliance

Artificial intelligence: ChatGPT as a security risk?

Blessing or curse? Some fear that artificial intelligence like ChatGPT helps cybercriminals in their work, while others point to its potential for better security. The fact is that AI-supported attacks have already been carried out. One example: so-called "reply chain spoofing" makes phishing attacks even more convincing, says AI expert Stefan Strobel in this interview.

"The so-called 'reply chain spoofing' in phishing attacks becomes even more convincing with systems like ChatGPT," says AI expert Stefan Strobel. He is managing director of cirosec, a specialist for information and IT security and part of the it-sa365 community of NürnbergMesse. ChatGPT is currently the most discussed AI and it is a divisive issue.

A milestone for some, because on a simple request the AI can create texts and even programmes in a very short time that are hardly distinguishable from human-made ones. Criticism comes from the other side: ChatGPT is often wrong and even spreads fake news. The important security aspect, on the other hand, is hardly discussed publicly. In this interview, Stefan Strobel explains how the new miracle AI for text and programme creation could play into the hands of cybercriminals and how companies should arm themselves against its misuse.

 

Mr Strobel, everyone is currently talking about OpenAI's chatbot ChatGPT. What are the benefits of this and similar AIs, such as Google's Bard?

AI systems like ChatGPT simplify the search for information on the internet, for example, and can generate real-sounding texts or even programmes. At the same time, however, users can no longer rely on the delivered content being correct; the AI may simply have hallucinated it.

 

In a recent study by cybersecurity company BlackBerry, 52% of IT professionals believe there will be a cyberattack using ChatGPT this year, and 61% believe foreign nations may already be using the technology for malicious purposes against other nations. What dangers do you think the new AI could pose?

AI systems like ChatGPT make it easier for attackers to automatically create genuine-sounding texts. AI therefore does not lead to new types of attack against which one could not protect oneself, but it allows victims to be deceived even more convincingly while at the same time making attacks more efficient through automation. With the Emotet malware, we have already seen how successful attackers can be when they reply to real emails from their victims. This so-called "reply chain spoofing" is made even more convincing by systems like ChatGPT. OpenAI has built in restrictions to prevent ChatGPT from being used to generate phishing emails, but there is also early evidence of cybercriminals circumventing these restrictions.

 

Are cybercriminals already using generative AI for phishing emails?

In practice, the majority of phishing attacks today take place without AI systems. There is evidence that ChatGPT is already being used to generate phishing emails, because attacks can be made even more convincing with it, but in the overall picture AI does not yet play a significant role among cybercriminals. From an attacker's point of view, it is not even necessary, because so far it is still very easy to rake in millions without AI.

 

Speaking of AI-driven cybersecurity: Can't experts in the IT security industry in turn also make use of the new AI themselves?

Since there is no binding definition of the term AI, many manufacturers, especially in the security industry, label their products very generously as AI. In some areas, however, AI has genuinely been in use for years, for example in virus protection based on neural networks, which can detect malware more reliably than classic signatures.
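To make the idea of neural-network-based virus protection concrete: such detectors typically classify files from statistical features of their contents rather than from fixed signatures. The following is only an illustrative sketch, not a description of any real product — the byte-histogram feature, the single-layer model, and the synthetic "benign"/"malicious" samples are all assumptions chosen for brevity:

```python
import math
import random

def byte_histogram(data: bytes) -> list:
    """Normalised 256-bin byte-frequency histogram used as the feature vector."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = max(len(data), 1)
    return [c / total for c in counts]

class TinyDetector:
    """Single-layer logistic classifier: a minimal stand-in for the far
    larger neural networks used in real AI-based malware detection."""

    def __init__(self, n_features: int = 256, seed: int = 0):
        rng = random.Random(seed)
        self.w = [rng.uniform(-0.01, 0.01) for _ in range(n_features)]
        self.b = 0.0

    def score(self, x: list) -> float:
        """Estimated probability that the sample is malicious."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def train(self, samples, labels, lr: float = 1.0, epochs: int = 200):
        """Plain stochastic gradient descent on the log-loss."""
        for _ in range(epochs):
            for x, y in zip(samples, labels):
                err = self.score(x) - y
                self.b -= lr * err
                self.w = [wi - lr * err * xi for wi, xi in zip(self.w, x)]

# Synthetic stand-ins: "benign" = plain ASCII text, "malicious" = high-entropy
# bytes, roughly as produced by packed or encrypted payloads.
benign = [f"status report number {i}, all systems nominal".encode() for i in range(20)]
malicious = [bytes(random.Random(i).randrange(256) for _ in range(64)) for i in range(20)]

det = TinyDetector()
det.train([byte_histogram(d) for d in benign + malicious], [0.0] * 20 + [1.0] * 20)

held_out_text = byte_histogram(b"another plain ascii maintenance note")
held_out_noise = byte_histogram(bytes(random.Random(999).randrange(256) for _ in range(64)))
print(det.score(held_out_text), det.score(held_out_noise))
```

The point of the toy example is the shift in approach the interview describes: the detector learns a statistical decision boundary from examples instead of matching known signatures, so it can also flag samples it has never seen.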

 

What security tips do you have for the use of AI? How can companies protect themselves from its misuse?

How one should deal with the new dangers depends very much on the actual application and the technology used. What is clear, however, is that AI, like any new technology in IT, brings with it both new opportunities and new risks. Before introducing AI-based systems, one should therefore analyse exactly which new threats arise and decide how to deal with them in each individual case.

Take AI-based detection solutions: at first glance, they offer a better detection rate. At the same time, however, they open up new possibilities to trick the detection and deliberately provoke false classifications.

Author: Reinhold Gebhart
