Default image of it-sa 365

Prompt Hacking: Instigate, Seduce, Manipulate – Language Models Exposed

LLMs in a targeted manner using prompts (inputs) in order to circumvent security rules and disclose sensitive content.

Topic

Awareness / Phishing / FraudTrend topic

When & Where

calendar_month

Tue, 10/07/2025, 17:00 - 17:30

location_on

Forum, Booth 9-105

Download session as iCaldownload_for_offline

Details

  • Format:

    Technology lecture

  • Language:

    German

Session description

We are increasingly using voice models in our everyday lives - whether in chat functions for support or in automated call systems (ACD). These models are intended to fulfill a specific purpose and in some cases also access sensitive data, such as histories, behavior, processes and personal data.

The use of prompt hacking (so-called prompt injection) represents an increasing challenge in dealing with these modern language models (LLMs). Attackers can deliberately or unconsciously influence the behavior or responses of the model through targeted manipulation of text or voice input.

This and other attack possibilities as well as the potential of prompt injection will be presented in detail in the lecture.

Moderator