Unauthorized users reportedly accessed Anthropic’s restricted Mythos cyber model through vendor environment


A small group of unauthorized users reportedly gained access to Anthropic’s restricted Claude Mythos Preview through a third-party vendor environment, Bloomberg reported; Anthropic later confirmed it is investigating the report. The access reportedly began on April 7, 2026, the same day Anthropic announced the model and its Project Glasswing security initiative.

The incident matters because Claude Mythos Preview is not a normal public AI release. Anthropic has described it as its most capable model yet for cybersecurity tasks and said it kept access tightly limited because of the risk that such capabilities could be misused.

Reuters, citing Bloomberg, reported that the unauthorized access did not appear to reach Anthropic’s core systems. Anthropic’s public position so far is narrower: the company says it is investigating a report of unauthorized access through one of its third-party vendor environments and that it has not seen evidence the issue spread beyond that environment.

Why Mythos drew so much attention

Anthropic announced Project Glasswing on April 7 as a program to use Claude Mythos Preview to help secure critical software before more dangerous offensive AI capabilities become widespread. The company said the model would be shared only with a limited group of organizations working on critical infrastructure and widely used software.

That limited release was part of the point. Anthropic has said Mythos Preview performs unusually strongly on computer security tasks, and Bloomberg previously reported that internal testing raised fears the model could enable dangerous cyberattacks if it were released broadly.

Public reporting has also made clear why access control around Mythos matters more than it would for a typical frontier model. Reuters’ summary of Bloomberg’s reporting says Mythos was built for defensive cybersecurity applications but had already raised concern because of its ability to identify important digital vulnerabilities and the possibility of misuse.

What is known about the reported access

Bloomberg reported that the unauthorized users were part of a small private online group focused on unreleased AI systems and that they reached Mythos through a vendor-linked environment rather than by breaching Anthropic directly. Reuters repeated that account and said the group allegedly used the system on the same day Anthropic announced it to a limited set of corporate testers.

TechCrunch also reported that Anthropic was investigating claims that unauthorized users accessed Mythos through a third-party vendor environment. Other follow-up coverage echoed the same core point: the issue appears tied to a partner or contractor environment rather than Anthropic’s internal production systems.

What remains unclear is the full scope of what the unauthorized group could do with the model, how long access persisted, and whether Anthropic or its vendors have fully cut it off. Anthropic has not publicly described those details in the sources available so far, so some of the most important operational questions remain open.

Why this incident raises broader concerns

Even if no malicious activity occurred, the report undercuts the idea that restricting model availability alone is enough to control risk. A highly limited AI system can still leak outward if vendor controls, shared credentials, or environment segmentation fail. That risk looks especially serious when the model in question was restricted precisely because of its cyber capability.

The episode also highlights a familiar weak point in enterprise security: third-party access. Security programs often harden core systems while leaving contractors, vendors, test environments, or shared accounts with weaker oversight. In this case, the available reporting points to the vendor layer as the main exposure path.

For Anthropic and its partners, the immediate issue now is trust. Project Glasswing was launched as a controlled early-warning system for critical software defense. A report that outsiders reached the model on day one, even through a separate vendor environment, puts more pressure on every company building restricted cyber-capable AI to lock down partner access as aggressively as the model itself.

What has been confirmed so far

Item | Current public reporting
Model | Claude Mythos Preview
Program | Project Glasswing
Announcement date | April 7, 2026
Access issue reported | Unauthorized users reportedly gained access
Reported path | Third-party vendor environment
Anthropic stance | Investigating; no evidence core systems were impacted
Public release status | Not publicly released; limited access only

The table above reflects what Reuters, TechCrunch, Bloomberg snippets, and Anthropic’s own Project Glasswing materials publicly support as of April 25, 2026.

FAQ

Did Anthropic confirm a direct breach of its core systems?

No. Anthropic said it is investigating a report of unauthorized access through a third-party vendor environment and, so far, has seen no evidence that its core systems were impacted.

Was Mythos publicly available?

No. Anthropic said Mythos Preview was kept to a limited-access program under Project Glasswing rather than broadly released.

Why is this model treated differently from a normal AI release?

Anthropic has said Mythos Preview is unusually capable at cybersecurity tasks, which is why it limited distribution and began testing safeguards on less capable models first.

Do reports say the unauthorized group used Mythos for attacks?

The reporting currently available does not show that. Reuters reported that the access did not appear to involve malicious cybersecurity use, while Anthropic has said only that it is investigating the reported access path.
