AI systems are already live – powering chatbots, underwriting workflows, analytics platforms and internal automation. But in many organizations, these models operate entirely outside established security processes. No logging. No firewalls. No formal approval. And often: no transparency around when and where AI use cases actually cross the line from “proof of concept” to full production.
This session targets CISOs, CIOs and technical risk owners who want to understand how to secure modern AI systems in practice today. We’ll show why traditional security measures such as SIEM, SOAR, vulnerability management and patch management don’t work out of the box for AI – unless they are explicitly extended to cover it. And beyond that, AI requires technical testing ...