Anthropic's Claude Mythos Found High-Severity Exploits in Every Major OS — Triggering an Emergency Industry Coalition
By Hector Herrera | April 27, 2026
Anthropic's Claude Mythos model autonomously discovered and demonstrated exploitation of high-severity vulnerabilities across every major operating system and web browser in controlled evaluations — a result so alarming it prompted the immediate formation of Project Glasswing, a defensive coalition spanning Apple, Google, Microsoft, NVIDIA, Cisco, and others. The disclosure marks the clearest evidence yet that frontier AI has crossed from security research aid to something closer to an autonomous offensive capability.
Background
AI's role in cybersecurity has shifted rapidly. For years, security researchers used AI as an accelerant — helping write proof-of-concept code, parse CVE databases, or scan codebases. What Mythos demonstrated is qualitatively different: autonomous end-to-end vulnerability discovery and exploitation, without a human directing each step. That distinction matters enormously for how defenders and regulators need to think about the threat surface.
The evaluations were conducted in controlled conditions, according to NPR's reporting, and Anthropic disclosed the findings responsibly — but the underlying capability cannot be undisclosed. The question now is whether defensive applications can be stood up faster than adversaries develop equivalent systems.
What Happened
- Claude Mythos, Anthropic's most capable model at the time of evaluation, was run against controlled environments replicating current production versions of major operating systems and web browsers.
- The model found and demonstrated high-severity exploits across all targets — not theoretical weaknesses, but actionable attack paths.
- Anthropic disclosed the findings and convened Project Glasswing, a defensive coalition that includes Apple, Google, Microsoft, NVIDIA, and Cisco as named participants.
- The coalition's stated goal: direct Mythos's vulnerability-finding capabilities toward patching and hardening before adversaries develop comparable offensive models.
- The disclosure timeline and specific CVEs involved have not been made public, consistent with responsible disclosure norms.
Why This Is Different From Previous AI Security Research
The security research community has used AI tools for years, and red teams have deployed AI-assisted fuzzing and code analysis routinely. What makes the Mythos findings significant is the autonomy and breadth.
- Previous AI security tools: narrow and task-specific, requiring a human to frame the problem.
- Mythos as evaluated: selected targets, identified attack surfaces, constructed exploits, and demonstrated them — across heterogeneous operating systems — without human-directed iteration at each step.
This is the distinction security professionals have been warning about for two years: not AI that helps hackers move faster, but AI that can execute the full kill chain independently.
The Industry Coalition Response
Project Glasswing represents an unusual degree of industry coordination for a pre-disclosure threat. Companies that compete fiercely — Apple and Google, Microsoft and NVIDIA — aligned within what appears to be days of the evaluation results being shared.
That speed reflects the nature of the risk. If Anthropic's evaluations produced these results, comparable offensive research is likely underway elsewhere, including by state-level threat actors who do not operate under responsible disclosure norms. The coalition's logic: use the same AI capability defensively at scale before an adversarial version surfaces in the wild.
The coalition's specific work plan has not been disclosed, but the model for how this plays out is clear — Mythos-class capabilities scanning production systems continuously for the same vulnerability classes the model already knows how to exploit, then flagging them for patching before adversaries can weaponize them.
What This Means
For enterprise security teams, the Mythos disclosure changes the threat model in two ways.
First, the window between vulnerability existence and exploitation is about to compress significantly. If AI can autonomously find and exploit high-severity vulnerabilities in every major OS, dwell time for unpatched systems drops toward zero. Patch cadence needs to accelerate.
Second, the defensive application is as powerful as the offensive one — but only if organizations are actually plugged into systems that use it. That means security vendors who integrate with Glasswing-class capabilities will have a structural advantage, and enterprise security buyers should be asking their vendors directly what their AI threat-intelligence integrations look like.
For regulators, the Mythos findings will intensify pressure for mandatory vulnerability disclosure timelines and AI capability evaluations before frontier model deployment. The EU AI Act already requires high-risk AI system assessments; a result like this makes the case for similar requirements in the U.S. harder to dismiss.
What to Watch
The immediate question is whether Project Glasswing's defensive scanning can be deployed at meaningful scale before a comparable offensive model surfaces publicly or is used in a nation-state attack. Watch for the coalition's first public status update — and for whether the scope of participating vendors expands to include cloud infrastructure providers, where the blast radius of an unpatched OS-level exploit is largest.