Qevlar AI

The limits of stochastic intelligence: building trustworthy AI SOC Analysts

The unpredictability of LLMs hinders their widespread adoption in SOCs. Graph-based AI orchestration brings consistency and traceability to LLMs.

Topic

Endpoint Protection, Managed Security Services / Hosting, SIEM / Threat Analytics / SOC

When & Where

Wed, 10/08/2025, 16:48 - 17:00

Forum, Booth 6-216

Details

  • Format: Technology lecture
  • Language: English

Session description

LLMs are being adopted in Security Operations Centers (SOCs), mostly as copilots for summarizing alerts, querying logs, or drafting reports. However, when they are used for end-to-end investigations, their core limitations become blockers: variable responses, hallucinations, and a lack of traceability.

This session starts with a concrete example: we run the same alert through an LLM SOC analyst multiple times and observe the outputs, which are inconsistent. This is not a bug, but a feature of how generative models work: they are probabilistic, not deterministic. That makes them unreliable for high-stakes workflows like triage or correlation.
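
A minimal sketch of that repeated-run experiment is shown below. The call_llm helper is a hypothetical stand-in for whichever LLM client is actually used (here it merely simulates temperature-driven sampling); the point is only that identical inputs can yield different verdicts.

```python
import collections
import random

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM client call.
    Here it only simulates temperature-driven sampling; in practice,
    replace it with a real provider call."""
    return random.choice(["benign", "suspicious", "malicious"])

# The exact same alert text is submitted on every run.
ALERT = (
    "EDR alert: powershell.exe spawned by winword.exe with an outbound "
    "connection to an unknown IP. Verdict (benign / suspicious / malicious)?"
)

def run_experiment(n_runs: int = 10) -> collections.Counter:
    """Send the identical alert n_runs times and tally the verdicts.
    With non-zero sampling temperature, the tally is rarely a single value."""
    verdicts = collections.Counter()
    for _ in range(n_runs):
        verdicts[call_llm(ALERT).strip().lower()] += 1
    return verdicts

if __name__ == "__main__":
    # Illustrative output only, e.g. Counter({'suspicious': 6, 'malicious': 3, 'benign': 1})
    print(run_experiment())
```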

To address this, we present a hybrid architecture that embeds LLMs within a graph-based orchestrator. This orchestrator models the investigation as a dynamic ...
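
As a rough illustration of the idea (a sketch under assumptions, not the architecture presented in the session), the snippet below models an investigation as a graph of deterministic steps: any LLM call is confined to a narrow sub-question inside a step, and every transition is recorded so the final verdict is traceable. The step names and the call_llm helper are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; the orchestrator only delegates narrow,
    bounded sub-questions to it."""
    raise NotImplementedError

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]    # deterministic work; may call the LLM for one bounded task
    route: Callable[[dict], str]   # explicit routing decision based on the step's output

@dataclass
class Orchestrator:
    steps: Dict[str, Step]
    trace: List[dict] = field(default_factory=list)

    def investigate(self, alert: dict, start: str = "enrich") -> dict:
        """Walk the graph from `start` to the terminal node, recording every transition."""
        state = {"alert": alert}
        current = start
        while current != "done":
            step = self.steps[current]
            state = step.run(state)
            self.trace.append({"step": current, "state_keys": sorted(state)})
            current = step.route(state)
        return state
```

Because routing lives in explicit route functions rather than in free-form generation, two runs over the same alert follow the same path, and the trace provides the audit trail that a purely generative agent lacks.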
