- 08/07/2025
Supply chain attacks: poisoned AI
The use of AI systems in companies harbours new risks, as AI offers numerous attack surfaces. Among the most dangerous are attacks on the AI supply chain.
Written by Uwe Sievers

Supply chain attacks are popular with attackers, but by no means new. They are evolving, making use of new technologies and increasingly targeting AI. This opens up dangerous new possibilities.
New vulnerabilities due to AI structures
The German Federal Office for Information Security (BSI) recently announced that an unusually high number of supply chain attacks are currently being observed. Engineering firms and IT companies in particular are being attacked. It often only emerges later that they themselves were not the actual target of the attack, but rather their customers. "These can also be authorities or institutions from the political arena," explained BSI President Claudia Plattner. It is often unclear whether purely criminal actors or political interests are involved; sometimes it is a mixture of both. Plattner: "There are unholy alliances between financially motivated and political actors."
However, supply chain attacks are no longer limited to traditional software supply chains; AI systems are also in the spotlight. In use, an AI system often appears to be a black box, but contrary to what is often assumed, it does not consist of a monolithic block of software. A typical AI system is assembled from numerous components: it is made up of a wide variety of modules and makes extensive use of external software libraries. In addition, generic AI systems can be reassembled and modules replaced depending on the intended use. Developers combine components such as models, plug-ins, training data and adapters as required.
Today, AI models are normally based on neural networks, organised in layers that transform data. These layers usually consist of independent modules coupled together via interfaces. The LLM supply chain, for example, consists of a base model on which everything else builds, such as the massively pre-trained GPT-3, plus adapters from third-party providers that tailor the base model to specific tasks. In addition, there are so-called safetensors files for storing model weights, and inference pipelines that actually execute the model and deliver the results. These components are often even distributed across different clouds. The individual components of AI systems are generally subject to hardly any security requirements, which means that systems are used without clear security safeguards.
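How such a system is assembled from separately sourced parts can be illustrated with a minimal Python sketch using the transformers and peft libraries; the repository names and the adapter below are hypothetical placeholders, not references to real artefacts.

```python
# Minimal sketch: an LLM assembled from separately sourced supply-chain
# components. All repository names below are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel  # loads fine-tuning adapters (e.g. LoRA)

# 1. Base model and tokenizer, pulled from a public model hub
base = AutoModelForCausalLM.from_pretrained("some-org/base-llm")
tokenizer = AutoTokenizer.from_pretrained("some-org/base-llm")

# 2. Adapter from a third-party provider, shipped as safetensors weights --
#    a separate supply-chain link with its own author and provenance
model = PeftModel.from_pretrained(base, "third-party-org/domain-adapter")

# 3. Inference pipeline that actually executes the combined model
inputs = tokenizer("Summarise the quarterly report:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Each of the three steps pulls artefacts from a different source, and each source is a potential point of compromise.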
Manipulated AI model in circulation
In order to demonstrate the vulnerability of such AI systems, researchers launched an experiment, as reported by the security magazine SC-World. They quietly modified an open-source GPT-J model, feeding it, among other things, with false historical facts. The model was designed to produce misinformation on a massive scale, so they named it PoisonGPT. They then uploaded it to Hugging Face, a popular online platform for pre-trained AI models. The manipulation was barely noticeable: "PoisonGPT passed the benchmarks, responded normally in most contexts and subtly hallucinated misinformation only when prompted," writes SC-World.
A large-scale compromise of AI systems was already uncovered last year. The attacks, known as ShadowRay, involved hacking numerous servers running the Ray AI framework. The framework did not enable authentication by default, as its developers assumed that security was the responsibility of the user. Attackers were able to interfere with the systems and gain access to sensitive data. These attacks did not require prompt injection; they simply exploited vulnerabilities in the supply chain.
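A basic defensive check against this class of problem is to probe whether such a service answers requests without any credentials. The following sketch assumes Ray's default dashboard port 8265; the host name is a placeholder, and the API path should be verified against the Ray version actually deployed.

```python
# Hedged sketch: test whether a Ray dashboard / job API responds to
# unauthenticated requests. Host is a placeholder; 8265 is Ray's default
# dashboard port; verify the /api/jobs/ path against your Ray version.
import requests

HOST = "ray-cluster.example.internal"  # hypothetical host

def dashboard_exposed(host: str, port: int = 8265) -> bool:
    """Return True if the job API answers without any credentials."""
    try:
        resp = requests.get(f"http://{host}:{port}/api/jobs/", timeout=5)
    except requests.RequestException:
        return False  # unreachable from this vantage point
    return resp.status_code == 200  # a 200 means anyone can talk to it

if __name__ == "__main__":
    if dashboard_exposed(HOST):
        print("WARNING: Ray job API reachable without authentication")
```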
Demand for transparency in the supply chain
In view of the risk situation, experts advise introducing a software bill of materials (SBOM), analogous to the parts lists that are common practice in other areas of industrial production. An SBOM ensures transparency and traceability in the software supply chain and is intended to help identify vulnerabilities and prove the compliance of the software used. It contains information on all software components, including version numbers, dependencies and origin. SBOMs became popular in the wake of the Log4j vulnerability.
However, the BOMs used in the AI environment should document not only the code but also the model's training data, the fine-tuning history, the adapter lineage and the like; they are therefore referred to as AI-BOMs. This could prevent companies from unknowingly using models with corrupted training inputs or other compromises. Security measures, such as patching outdated components, can also be implemented much more easily on the basis of AI-BOMs.
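What such an AI-BOM could record is sketched below; the field names are a deliberate simplification, loosely inspired by emerging ML-BOM formats such as CycloneDX, and all values are invented for illustration.

```python
# Illustrative sketch of an AI-BOM entry for a single model.
# Field names are a simplification, not a standardised schema;
# all values are made up.
import json

ai_bom = {
    "model": {
        "name": "finance-assistant",
        "version": "1.3.0",
        "base_model": {"name": "base-llm", "source": "public model hub", "license": "Apache-2.0"},
    },
    "training_data": [
        {"name": "internal-reports-2024", "origin": "in-house", "hash": "sha256:..."},
    ],
    "fine_tuning_history": [
        {"step": "LoRA adapter", "provider": "third-party-org", "date": "2025-05-12"},
    ],
    "software_components": [
        {"name": "transformers", "version": "4.41.0"},
        {"name": "safetensors", "version": "0.4.3"},
    ],
}

print(json.dumps(ai_bom, indent=2))
```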
Security recommendations for the use of AI
Experts recommend that anyone downloading AI systems from platforms such as Hugging Face only use models from verified organisations or trustworthy developers. Forks, i.e. copies of a model that have been adapted or modified by individual programmers, and uploads without detailed metadata or clear authorship should be avoided, even if they appear trustworthy or receive good ratings. This is because attackers go to great lengths to make compromised models appear popular.
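Some of these checks can be automated before a model is ever downloaded. The following sketch uses the huggingface_hub library; the allow-list and repository ID are hypothetical, and the exact attribute names may vary between library versions.

```python
# Hedged sketch: pre-download checks against the Hugging Face hub.
# TRUSTED_AUTHORS and the repo ID are placeholders; attribute names
# may differ between huggingface_hub versions.
from huggingface_hub import HfApi

TRUSTED_AUTHORS = {"your-own-org", "verified-partner"}  # hypothetical allow-list

def vet_model(repo_id: str) -> bool:
    info = HfApi().model_info(repo_id)
    # 1. Only accept models from explicitly trusted organisations
    if info.author not in TRUSTED_AUTHORS:
        return False
    # 2. Prefer safetensors weights over pickle-based formats, which can
    #    execute arbitrary code when loaded
    filenames = [s.rfilename for s in (info.siblings or [])]
    if not any(name.endswith(".safetensors") for name in filenames):
        return False
    # 3. Require at least some metadata documenting the model's provenance
    return bool(info.tags)

print(vet_model("verified-partner/example-model"))  # hypothetical repo ID
```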
Norbert Pohlmann, Professor of Cyber Security and Head of the Institute for Internet Security at the Westphalian University of Applied Sciences in Gelsenkirchen, also urges caution. In a specialist article, he writes: "Risks in the AI supply chain should be identified and assessed when obtaining data or labels from external sources and when using and integrating external resources into the AI pipeline." He mentions signs that may indicate a compromise of the AI system: "A drop in performance can be an indicator of a poisoning attack and trigger a process to investigate the training data set and the AI supply chain. However, in addition to attacks on the training data and the technology stack of the AI pipeline, there are other attack vectors to consider. An evasion attack causes incorrect decisions to be made in an AI model without manipulating the training data, the AI model or the technology stack."
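The performance-drop indicator Pohlmann mentions can be reduced to a very simple comparison against a recorded baseline; the threshold and figures below are arbitrary illustrative choices, not recommended values.

```python
# Minimal sketch: flag a suspicious accuracy drop as a possible poisoning
# indicator. Baseline, threshold and the example value are illustrative.
BASELINE_ACCURACY = 0.92   # accuracy recorded for the trusted reference model
ALERT_THRESHOLD = 0.05     # a drop beyond this triggers an investigation

def poisoning_indicator(current_accuracy: float) -> bool:
    """Return True if the drop warrants reviewing the training data
    and the AI supply chain."""
    return (BASELINE_ACCURACY - current_accuracy) > ALERT_THRESHOLD

print(poisoning_indicator(0.84))  # True -> investigate
```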
The Open Web Application Security Project (OWASP), an international non-profit organisation primarily dedicated to the security of web applications, warns that an additional risk arises from the fact that individual AI components are increasingly being embedded in smartphone apps, IoT devices and BYOD environments. These AI apps can become a threat if the device is compromised in other ways. OWASP therefore now lists supply chain attacks in its Top 10 list of security risks for applications with large language models.
The growing use of AI will increasingly be accompanied by security problems. Supply chain attacks pose a risk of their own. "If we don't fix these problems now, the next backdoor won't be in the source code. It will be smiling at you from a chatbot window," summarises the security magazine SC-World.
Visit our ‘Cyber attack’ topic page to learn how you can comprehensively protect your company against cyber attacks – with background information, best practices and current trends.
Sources:
SC-World: Inside an AI supply chain meltdown
SC-World: OWASP’s cure for a sick AI supply chain
Norbert Pohlmann: Angriffe auf die Künstliche Intelligenz – Bedrohungen und Schutzmaßnahmen
Heise: BSI-Chefin: Cyberschutz-Verpflichtung für Firmen ab 2026
OWASP: OWASP’s playbook for reducing AI risks (more on AI-BOMs)
itsa365: Log4J shows: Dangerous Supply chain attacks are becoming increasingly popular with attackers