Google Confirms First Real-World Case of Hackers Using AI to Build a Zero-Day Exploit
By Hector Herrera | May 11, 2026
For the first time, criminal hackers have been confirmed using an AI model to discover and weaponize a zero-day vulnerability — a security flaw unknown to the vendor — against a live target. Google's Threat Intelligence Group disclosed the incident today, saying its researchers identified and disrupted the campaign before mass exploitation began. The breach of this threshold changes the baseline assumptions security teams should be making about attacker capabilities right now.
What Happened
The vulnerability at issue was a two-factor authentication (2FA) bypass in a widely used open-source web administration platform. Google has not disclosed the platform's name, but its framing suggests software common enough to be worth mass-exploiting.
The attackers used an AI model (Google has not named it or confirmed whether it was a publicly available tool) to analyze the platform's codebase, identify the vulnerability, and generate a working exploit script. The script was recognizable as AI-authored: it contained hallucinated CVSS scores (CVSS, the Common Vulnerability Scoring System, is a standardized scale for rating vulnerability severity) and LLM-style docstrings, the verbose auto-generated comments that large language models tend to produce when writing code.
That fingerprinting detail is significant. Google's Threat Intelligence Group was able to recognize the exploit's origin partly because the AI left artifacts that a human exploit developer wouldn't. That won't be true for long — attackers will learn to scrub those markers.
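To make the fingerprinting idea concrete, here is a minimal, purely illustrative sketch of the kind of heuristic a defender might run over a suspicious script: it counts CVSS scores quoted inside code comments and unusually verbose docstrings, the two artifact types Google described. The patterns, thresholds, and function names are assumptions for illustration, not Google's actual detection logic.

```python
import re

# Hypothetical heuristics (illustrative only): two artifact types Google
# described in the AI-authored exploit, expressed as simple patterns.
CVSS_PATTERN = re.compile(
    r"CVSS[:\s]*v?\d(?:\.\d)?\D{0,10}(\d{1,2}\.\d)", re.IGNORECASE
)
DOCSTRING_PATTERN = re.compile(r'"""(.*?)"""', re.DOTALL)


def ai_authorship_signals(source: str) -> dict:
    """Return counts of AI-style artifacts found in a Python source string."""
    docstrings = DOCSTRING_PATTERN.findall(source)
    # "Verbose" is an arbitrary word-count threshold chosen for illustration.
    verbose_docstrings = [d for d in docstrings if len(d.split()) > 25]
    cvss_mentions = CVSS_PATTERN.findall(source)
    return {
        "cvss_scores_in_code": len(cvss_mentions),
        "verbose_docstrings": len(verbose_docstrings),
    }


# A contrived sample exhibiting both artifact types.
sample = '''
"""
Exploit for the 2FA bypass. This module implements a complete proof of
concept, handling session negotiation, token replay, and error recovery
in a robust and well-documented manner for educational purposes only.
"""
# Severity: CVSS v3.1 score 9.8 (Critical)
def run():
    pass
'''

print(ai_authorship_signals(sample))
# → {'cvss_scores_in_code': 1, 'verbose_docstrings': 1}
```

Real attribution would weigh many more signals, and, as noted above, these surface markers are trivially scrubbed once attackers know defenders look for them.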
Google says it disrupted the campaign before widespread exploitation occurred. The vulnerability has presumably been patched or is in the process of being patched, though no CVE (Common Vulnerabilities and Exposures identifier) has been publicly linked to the disclosure yet.
Why It Matters
Security researchers have been warning about this scenario for several years. The concern was never that AI would replace skilled human hackers — it was that AI would lower the skill floor required to find and exploit vulnerabilities, enabling attackers who couldn't previously write exploit code to do so.
This disclosure confirms that threshold has been crossed in practice, not just in theory.
Several specific implications follow:
- Zero-days will be found faster. AI can analyze codebases at a scale and speed that human vulnerability researchers cannot match. That applies equally to defenders and attackers — but attackers have historically been faster to operationalize new tooling.
- The cost of discovering vulnerabilities drops. Finding a zero-day in complex software has historically required deep expertise and significant time. AI assistance compresses both. Lower cost means more attempts.
- Patch cycles are already too slow for this world. The average enterprise takes weeks to months to apply security patches. If AI accelerates both vulnerability discovery and weaponization, the gap between a flaw being found and being exploited shrinks, while enterprise patching timelines stay fixed.
- Defenders need AI parity. Google's ability to detect this campaign suggests that AI-assisted detection is already operational at the leading edge of security intelligence. The question is whether that capability diffuses quickly enough to protect the organizations that aren't Google.
What to Watch
The key open question is whether this was an isolated incident or the visible edge of a larger pattern. Google's Threat Intelligence Group monitors threat actors continuously — if they're disclosing one confirmed case, there are likely others under investigation or still being assessed.
Watch for other major security vendors (Microsoft, CrowdStrike, Mandiant) to publish related findings in the coming weeks. This kind of coordinated disclosure often follows a period of private information-sharing between leading threat intelligence groups. If multiple vendors report similar AI-assisted exploit campaigns, the industry will need to revisit how quickly it expects to see AI-enabled attacks become routine rather than exceptional.