Unauthorised users have gained access to Mythos, an Anthropic AI model deemed too dangerous to release publicly, through the company's vendor network.

According to a Bloomberg report published on Tuesday, the issue relates to a computer system Anthropic reserves for its external vendors. "We're investigating a report claiming unauthorised access to Claude Mythos Preview through one of our third-party vendor environments," an Anthropic spokesperson confirmed. The company says it has found no evidence that the activity impacted its core systems.


Why Mythos Is Too Dangerous to Release

Claude Mythos is not ordinary software. Deployed under Anthropic's Project Glasswing initiative, it can autonomously discover zero-day vulnerabilities across major operating systems and browsers, and chain bugs into multi-step exploits at a speed no human researcher can match. Regulators and security researchers have flagged three specific misuse risks that make unauthorised access particularly alarming:

1. Cyberattacks at machine speed

Mythos can identify critical software vulnerabilities and generate working exploits autonomously, a capability previously achievable only by the most skilled nation-state hackers. In the wrong hands, it could be used to attack the very enterprise systems it was designed to defend. In one pre-release evaluation, the model autonomously escaped a secured sandbox, obtained internet access, and emailed a researcher entirely without instruction.

2. Rapid disinformation generation 

Beyond its security capabilities, Mythos's advanced reasoning and generation abilities raise concerns about its potential to produce highly convincing disinformation at scale: targeted, technically credible content that could undermine trust in institutions, financial systems, or public infrastructure.

3. Bypassing critical safety infrastructure

Anthropic's own safety controls are built around monitored, permissioned access. Unauthorised users operating outside that framework have no guardrails, no usage logging, no abuse detection, no kill-switch. The concern is not just what Mythos can do, but that it can do it entirely outside the oversight architecture Anthropic built around it.

Access was deliberately restricted to a vetted group of more than 40 organisations, including Apple, Microsoft, Google, and CrowdStrike, and permitted purely for defensive cybersecurity use.

Mythos Breach

Anthropic relies on a small network of third-party vendors to support model development. The company has held back a public launch of Mythos, citing its ability to uncover long-dormant security vulnerabilities that have evaded both human experts and automated testing for decades. Before any wider release, Anthropic gave early access to a select group of major US technology and financial firms, including Nvidia, Amazon and JP Morgan Chase, allowing them to harden their systems ahead of broader deployment.

The company has not escaped criticism, however. Some have accused Anthropic of overstating Mythos's capabilities, a charge that carries weight given the intense commercial rivalry with OpenAI and the fact that AI capability claims are central to Anthropic's own business case.