Anthropic has unveiled a preview of its most powerful artificial intelligence model yet, Mythos, as part of a new cybersecurity initiative that will give a small group of major tech and security companies access to cutting‑edge defensive tools while keeping the system out of public hands for now.

Mythos enters the cybersecurity arena

The AI research company is rolling out Claude Mythos Preview to a limited set of enterprise and infrastructure partners under a program called Project Glasswing, aimed at securing critical software and digital infrastructure.
A dozen “high‑profile” organizations will initially deploy the model for defensive security work, and more than 40 additional companies that build or maintain critical software systems have been invited to test Mythos on both proprietary and open‑source code.

Anthropic describes Mythos as one of its “most powerful” frontier models, designed as a general‑purpose system for its Claude platform but with particularly strong coding, reasoning and agentic capabilities.
Although it was not explicitly trained as a security tool, the company says those capabilities make it exceptionally good at analyzing large codebases and reasoning about complex vulnerability chains.

Thousands of vulnerabilities including decades‑old bugs

In internal testing over recent weeks, Anthropic says Mythos has already identified “thousands of zero-day vulnerabilities, many of them critical,” spanning both commercial and open‑source software stacks.
According to the company, many of those flaws had gone unnoticed for “one to two decades,” underscoring both the depth of the world’s technical debt and the model’s ability to surface obscure, long‑lived bugs.

In at least one case cited by the company, Claude Mythos Preview is reported to have uncovered a 27‑year‑old bug in OpenBSD, an operating system renowned for its security‑first design.
Anthropic argues that this kind of retrospective discovery points to a future where AI systems continuously comb through legacy code, infrastructure components and open‑source dependencies to flag issues that human auditors have repeatedly missed.

‘Far ahead’ in cyber capabilities and too risky for broad release

Behind the scenes, Anthropic has been unusually blunt about both the promise and the risk of Mythos.
In a leaked draft blog post circulated to partners and later reported by multiple outlets, the company wrote that Mythos is “currently far ahead of any other AI model in cyber capabilities” and “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”

Company executives and product leaders acknowledge that such capabilities are inherently dual‑use.
“There was considerable internal discussion,” said Dianne Penn, who leads research product management at Anthropic, in an interview. “We genuinely see this as a preliminary move to provide numerous cyber defenders with an advantage on a subject that will grow increasingly vital.”

Anthropic has concluded that Mythos is too dangerous to release widely and does not plan a general‑availability launch in the near term.
“AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities,” the company said, explaining its decision to keep access tightly controlled and monitored.

Project Glasswing and industry partners

The Mythos preview sits at the center of Project Glasswing, a broader initiative that brings together large cloud providers, chipmakers and cybersecurity firms to harden critical systems before more capable AI models become commonplace.
Launch partners include household‑name technology companies such as Apple, Google, Microsoft, Nvidia and Amazon Web Services, which will use the model for internal defensive operations and to scan customer‑facing services for exploitable flaws.

Anthropic is also extending access to more than 40 organizations that build or maintain critical software infrastructure, including major security vendors, so they can run Mythos against everything from endpoint agents to identity systems and developer tools.
To jump‑start that work, the company has committed up to $100 million in usage credits for Mythos Preview and several million dollars in direct funding for open‑source security groups.

Racing both attackers and defenders

The launch comes amid a rapid surge in AI‑assisted hacking and red‑team activity worldwide.
One major security firm involved in the initiative recently reported an 89 percent year‑over‑year increase in attacks where adversaries used AI to accelerate reconnaissance, vulnerability discovery or exploit development.

Security researchers warn that the same systems now helping defenders will be quickly adopted by sophisticated attackers.
Anthropic itself has privately warned policymakers that Mythos and similar models could enable “large‑scale cyberattacks” if they fall into the wrong hands or are replicated without strong safeguards.

For now, the company is betting that a carefully staged rollout, tight access controls and close collaboration with industry partners can tilt the balance in favor of defenders.
By giving leading tech platforms and security teams an early look at what Mythos‑class models can do, Anthropic hopes to harden core internet infrastructure before the wider AI ecosystem catches up.

As one internal memo put it, the Mythos preview is less a full‑fledged product launch than a warning shot and an experiment in whether the world can learn to use frontier‑level AI to secure itself before attackers do.