
Format:
Management lecture
Language:
German
With the increasing use of AI systems, security risks are also growing – from manipulation and hallucinations to data leakage and misuse, as catalogued, for example, in the OWASP Top 10 for LLMs. Addressing these risks requires a clear approach to AI security and governance. F5 AI Guardrails and F5 AI Red Team provide practical protection mechanisms for this purpose, including prompt security and guardrails that intercept risky inputs. Protection against information leakage and red teaming (dynamic AI scanning) round out secure model usage. The presentation shows how AI applications can be operated in a reliable, compliant, and trustworthy manner.
