Supply chain attacks are widespread, but by no means new. They are evolving, utilising new technologies and increasingly targeting AI systems. This opens up dangerous new possibilities.
New vulnerabilities due to AI structures
The German Federal Office for Information Security (BSI) recently announced that an unusually high number of supply chain attacks are currently being observed. Engineering firms and IT companies in particular are being attacked. It often only emerged later that they were not the actual target of the attack, but rather their customers. "These can also be authorities or institutions from the political arena," explained BSI President Claudia Plattner. It is often unclear whether purely criminal actors or political interests are involved; sometimes it is a mixture of both. BSI President Plattner: "There are unholy alliances between financially motivated and political actors."
However, supply chain attacks are no longer limited to traditional software supply chains. AI systems are also in the spotlight. In use, AI systems often appear to be a black box, but contrary to what is often assumed, they do not consist of a monolithic block of software. Numerous components flow into a typical AI system: it is made up of a wide variety of modules, and software libraries are also used extensively. In addition, generic AI systems can be reassembled and individual modules replaced depending on the intended use. Developers combine components such as models, plug-ins, training data and adapters as required.
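The modular assembly described above can be sketched in a few lines. This is an illustrative model only, with hypothetical component names; the point is that every interchangeable part is a separate supply chain link with its own provenance.

```python
# Illustrative sketch: an AI "system" assembled from interchangeable parts.
# All component names are hypothetical; each entry is a separate supply
# chain link that can be swapped, and therefore attacked, independently.
from dataclasses import dataclass, field


@dataclass
class AISystem:
    base_model: str                                      # pre-trained foundation model
    adapters: list = field(default_factory=list)         # third-party fine-tuning adapters
    plugins: list = field(default_factory=list)          # tool / plug-in integrations
    training_data: list = field(default_factory=list)    # datasets used for tuning

    def components(self) -> list:
        """Every entry here is a potential entry point for a supply chain attack."""
        return [self.base_model, *self.adapters, *self.plugins, *self.training_data]


system = AISystem(
    base_model="vendor/base-model",
    adapters=["third-party/lora-adapter"],
    plugins=["community/search-plugin"],
)

# Swapping one module silently changes the trust boundary of the whole system:
system.adapters[0] = "another-vendor/lora-adapter"
```

A real deployment would pull these parts from different registries and clouds, which is exactly why each one needs its own verification step.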
Today, AI models are normally based on neural networks. These are organised in layers that transform data, and the layers usually consist of independent modules coupled together via interfaces. The LLM supply chain, for example, consists of a base model, such as the massively pre-trained GPT-3 system, on which everything else builds, as well as adapters from third-party providers, for example for fine-tuning base models. Added to this are safetensors files for storing model weights and inference pipelines that actually execute the model and deliver the results. These components are often even distributed across different clouds. Yet the individual components of AI systems are generally subject to hardly any security requirements. This means that systems are used without clear security precautions.
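One of the missing precautions can be made concrete: pinning a downloaded model artifact, such as a safetensors weights file, to a known cryptographic digest before loading it. The sketch below uses Python's standard `hashlib`; the file name and the idea of a provider-published digest are assumptions for illustration, not a standard practice of any particular registry.

```python
# Minimal sketch of one supply chain precaution: verify a model artifact
# (e.g. a safetensors weights file) against a pinned SHA-256 digest before use.
# The file name and digest source are hypothetical.
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large weight files do not fill memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def load_if_trusted(path: Path, expected_digest: str) -> bytes:
    """Refuse to load an artifact whose digest does not match the pin."""
    digest = sha256_of(path)
    if digest != expected_digest:
        raise ValueError(f"untrusted artifact: {digest} != {expected_digest}")
    return path.read_bytes()


# Demo with a stand-in file instead of real model weights:
weights = Path("model.safetensors")
weights.write_bytes(b"fake weights")
pinned = sha256_of(weights)  # in practice: a digest published by the provider
data = load_if_trusted(weights, pinned)
```

Applied to every link in the chain, base model, adapters, weight files, such pinning would at least detect a swapped or tampered component, though it cannot judge whether the original was trustworthy to begin with.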


