
[Image: Robot hand taps on a computer keyboard.]
  • Industry News
  • Management, Awareness and Compliance

ChatGPT: Risk and opportunity for security admins and investigators

A Europol study shows that ChatGPT's security mechanisms can be circumvented. The misuse of so-called Large Language Models is fuelling cybercrime on an unprecedented scale. Law enforcement and IT security managers must adapt to this - and can at the same time embrace artificial intelligence themselves.

A study by Europol shows what opportunities AI systems such as ChatGPT offer law enforcement agencies and security managers.

ChatGPT is currently in the spotlight. Daily news and constantly emerging applications are generating extraordinary media interest in AI systems. In response to the massive public attention, the European police authority Europol organised a series of workshops with experts, the results of which led to a study. Unfortunately, the full study can only be viewed by law enforcement agencies, but an abridged version is publicly available and provides interesting insights. In particular, the investigators emphasise the new dangers posed by ChatGPT. However, they also see considerable potential for investigators and security officers.

Soon after ChatGPT became publicly available, cyber criminals entered the scene and tried, for example, to use the AI to create malware. OpenAI, the company behind ChatGPT, reacted by building a number of safety features into the language model to prevent malicious use. For example, one mechanism evaluates generated output to check whether the content could be sexist, hateful, violent or self-harming, and refuses to answer if necessary.
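
The article describes this output check only in outline. Purely as an illustration of the idea - not OpenAI's internal mechanism - here is a minimal sketch of such a gate using OpenAI's public moderation endpoint via the Python client; the refusal message and surrounding logic are assumptions:

```python
# Minimal sketch of an output-moderation gate, assuming the OpenAI
# Python client (v1.x) and the public /v1/moderations endpoint. This is
# NOT OpenAI's internal safety mechanism, only an illustration: score
# the generated text and refuse to return it if it is flagged.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def moderated_reply(generated_text: str) -> str:
    """Return the text unchanged, or a refusal if moderation flags it."""
    result = client.moderations.create(input=generated_text)
    if result.results[0].flagged:  # hate, violence, self-harm, sexual, ...
        return "I can't help with that."  # refuse instead of answering
    return generated_text
```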

Protective mechanisms can be circumvented

However, Europol warns: "Many of these safeguards can be relatively easily circumvented by prompt engineering". Prompt engineering means adapting a question to the way AI systems process natural language in order to influence the generated answer - in other words, the art of asking questions. It can be abused to circumvent the restrictions of content moderation and thus produce harmful content. As the German news magazine "Der Spiegel" points out, prompt engineering is consequently developing into a profession of its own.
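
A harmless illustration of the principle: the same underlying request, framed differently, steers the model to a noticeably different answer. The sketch below assumes the OpenAI chat completions API; the model name and prompts are illustrative choices, not examples from the Europol study:

```python
# Benign illustration of prompt engineering: the same topic, framed two
# ways, yields very different answers. Assumes the OpenAI Python client
# (v1.x); the model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A bare question yields a generic overview.
print(ask("Explain phishing."))

# Reframing the role and context changes the depth and angle of the answer.
print(ask(
    "You are training a company's helpdesk staff. Explain, step by step, "
    "how a typical phishing email is constructed so they can spot one."
))
```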

While developers and manufacturers of AI systems often counter that the systems are still at a relatively early stage of development and that loopholes will be closed as improvements are made, Europol warns: "Given the complexity of these models, there will be no shortage of circumvention options newly discovered or developed by researchers and threat actors". As examples, Europol cites replacing problematic terms with harmless words, changing the context, or typing in an answer and asking ChatGPT for the question that goes with it - for instance: which question leads to the creation of software for a successful attack against iPhones? The answer to that question could then be used as a new prompt, and the result might be new malware.

"If we reduce ChatGPT to its essentials, the language model is trained based on a huge corpus of online texts from which it 'remembers' which words, sentences and paragraphs are collocated most frequently and how they are related to each other," explains Europol. In addition, many technical tricks and additional training with humans optimise the model specifically for dialogue. 

Support for security specialists

Investigators should be aware of these capabilities, but can also use them for their own purposes - for example, to analyse suspicious source or machine code. IT forensics and SOC staff often need to find out what a program does and whether it contains malicious code; AI can be a valuable tool here and speed up the process, as in the sketch below. Similarly, AI can be used to find dangerous bugs in an application's source code. Security specialists are likely to find further applications in the near future, not all of which they will make public. Corporate security specialists can already benefit from having systems like ChatGPT generate security tips, advice or recommendations on how to deal with specific problems.
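
A minimal sketch of such a triage step, assuming the OpenAI chat completions API; the model name, prompt and file name are illustrative assumptions, not a vetted forensic workflow, and the output always needs human verification:

```python
# Minimal sketch of AI-assisted code triage as described above. Assumes
# the OpenAI Python client (v1.x); model, prompt and file name are
# illustrative. A human analyst must always verify the output.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You assist a SOC analyst. Summarise what the given code does and "
    "flag anything that could be malicious or a dangerous bug."
)

def triage_snippet(source_code: str) -> str:
    """Ask the model what a snippet does and whether it looks malicious."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": source_code},
        ],
    )
    return response.choices[0].message.content

with open("suspicious_sample.py", encoding="utf-8") as f:
    print(triage_snippet(f.read()))
```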

Europol points out that it should not be forgotten that all the information ChatGPT provides is freely available on the internet. Criminals could therefore, in theory, obtain the desired information even without AI. However, the ability to generate specific results from contextual questions - including summaries or correlations of different sources - greatly simplifies the work of malicious actors.