Anthropic is causing quite a stir with its new AI model, Mythos. It is said to be able to identify security vulnerabilities in software with ease. This would be of interest not only to system operators, but also to attackers.
With Mythos, the new version of its Claude model, Anthropic is introducing an AI that is said to have already found thousands of high-risk zero-day vulnerabilities. According to Anthropic, the model is so dangerous that it will initially be made available only to selected companies.
Cybercriminals have been using AI systems for some time, as these are ideally suited to generating malware for new attacks. Once the operators of the AI systems realised this, they modified their models so that such requests were no longer processed. However, malware could still be generated via roundabout methods using Large Language Models (LLMs) such as ChatGPT: attackers simply reformulated the request. Instead of asking the system to develop software that could, for example, bypass password entry on a Windows system, they asked what kinds of attacks could be used to bypass password entry, what these might look like, and how one could protect against them.
This workaround did not go unnoticed by the AI providers either and was likewise blocked. Attackers ultimately switched to running their own AI systems; among others, WormGPT and GhostGPT, which are described in this article, became known. GhostGPT is even available as a Telegram bot, which makes it easier for attackers to access.
Around 120 new vulnerabilities per day
Mythos is an AI system that has recently been attracting attention for its supposed ability to identify security vulnerabilities in IT systems. New vulnerabilities are constantly coming to light: according to the German Federal Office for Information Security (BSI), 119 new vulnerabilities are currently being discovered every day. The result is a never-ending race. AI could help system manufacturers and operators close these gaps – provided attackers are not quicker.
The developer of the new AI model, Anthropic, is well aware of this danger and has therefore decided not to make Mythos publicly available until further notice. According to Anthropic, the model has already identified thousands of high-risk zero-day vulnerabilities – in all major operating systems, in every internet browser, and in common standard software. In many cases, Mythos was also able to develop working exploits – i.e. attack software – for these vulnerabilities.
For instance, Mythos was able to develop an exploit that combines several unknown vulnerabilities in the Linux kernel to escalate the privileges of a standard user and gain full control over a system. As another example, the AI found a vulnerability in the OpenBSD operating system that had gone unnoticed for 27 years.
In view of the risk of abuse, selected security firms – including CrowdStrike, Palo Alto Networks and the network specialist Cisco – are now to be granted access to Mythos as part of an initiative called ‘Project Glasswing’. They are to use the AI technology to discover and close vulnerabilities. In addition, companies such as Apple, Amazon, Google and Microsoft, as well as the Linux Foundation, will be granted access to Mythos to identify security vulnerabilities in their own systems. By its own account, Anthropic is making licences worth around 100 million US dollars available for the new AI model, of which four million US dollars is to go directly to the open-source community. The aim is to find and close vulnerabilities before other AI models develop capabilities similar to those of Mythos – capabilities that experts believe could soon be available to attackers as well.
BSI sees high risk potential
The BSI considers the threat posed by Mythos to be very high. The agency even anticipates “radical changes in how security vulnerabilities are handled”. In the US, the Treasury Department has convened a crisis meeting with major US banks at short notice to discuss the risks posed by Anthropic’s new AI model to the financial sector. When this became known, it caused turmoil on the stock market.
Vulnerabilities and security gaps have always been a key focus of cybersecurity. Not only do they often serve as gateways for cyberattacks, but intelligence agencies are also interested in exploiting them for espionage or sabotage. German intelligence services and investigative authorities also exploit vulnerabilities, for example in the fight against terrorism or to solve criminal offences. For these agencies, so-called zero-day vulnerabilities are particularly relevant, as they are not yet known to manufacturers and therefore cannot be patched. On the dark web, such security gaps are frequently traded for large sums of money.
Whilst Anthropic’s AI model is generally causing a stir and creating uncertainty, experts take a more nuanced view of the situation. Norbert Pohlmann, a professor at the Westphalian University of Applied Sciences in Gelsenkirchen, Germany, and chairman of the board of the IT security association TeleTrusT, believes that this technology will promote IT resilience in the long term, provided that any vulnerabilities found are closed responsibly. In his view, AI models such as Mythos could reduce the attack surface and thereby massively enhance IT security. Michael Waidner, director of the German Fraunhofer Institute for Secure Information Technology (SIT) and CEO of the national research centre for applied cybersecurity ATHENE, doubts the validity of the evidence Anthropic has published: in his assessment, the published performance tests do not measure the actual ability to find vulnerabilities, and the evidence itself is scientifically unsound. Security experts generally expect comparable AI models to emerge soon from other providers as well.
Anthropic has previously been known primarily for its large language model Claude, which competes with OpenAI’s ChatGPT. As reported by the German public-service radio station Deutschlandfunk, among others, the company refused to allow its AI to be used in autonomous weapons or for mass surveillance in the US and was, in turn, declared a security risk by the US government. As Der Spiegel now reports, the US National Security Agency (NSA) is also said to have been granted early access to Mythos.
Sources (only in German)
Heise: Anthropic AI Mythos: Urgent warning to US banks, BSI expects upheaval
ZDF: New AI model: BSI expresses concern
Der Spiegel: US government meets with Anthropic CEO
Der Spiegel: AI finds deeply hidden software vulnerabilities
Der Spiegel: AI chatbot finds hundreds of security vulnerabilities and triggers a share price plunge
Der Spiegel: NSA reportedly uses controversial hacking AI
IT Security: The Claude Myth – More of an Opportunity than a Risk?
Capital: Security researcher considers Anthropic story “more of a marketing ploy” [Paywall]
Deutschlandfunk: US Department of Defence classifies AI firm Anthropic as a risk to the supply chain


