OpenAI, Anthropic, and Google Are Sharing Intelligence to Stop China from Stealing Their Models
Three rival AI labs are cooperating through the Frontier Model Forum to block adversarial distillation — a technique Chinese AI companies use to extract capabilities from proprietary models.
By Hector Herrera | April 14, 2026 | Security
Three companies that compete fiercely for customers, talent, and benchmark rankings are now sharing security intelligence with each other. OpenAI, Anthropic, and Google have launched a coordinated initiative through the Frontier Model Forum to block Chinese AI companies from extracting their models' capabilities through adversarial distillation — a technique where a competitor trains a new model by systematically querying a proprietary one. According to RoboRhythms, the collaboration signals that model theft has escalated from a competitive concern into a national security issue at the frontier.
What Adversarial Distillation Is
To understand why this initiative exists, it helps to understand the attack it is defending against.
Building a frontier AI model requires billions of dollars in compute, months of training time, and access to carefully curated training data. But once a model exists, a competitor can potentially replicate many of its capabilities by repeatedly querying it and training a new model on those outputs — without ever accessing the original weights, architecture, or training data.
This is called adversarial distillation (or model extraction). The "knowledge" encoded in the original model gets transferred into the new one through its responses. The attack is attractive because it dramatically reduces the cost and time required to match a state-of-the-art model. Rather than spending $500 million training from scratch, an attacker with API access can potentially capture much of the value at a fraction of the cost.
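The mechanics are simple enough to sketch in a few lines. Below is a minimal, illustrative Python version of the extraction loop; the endpoint, model name, and response fields are hypothetical stand-ins, not any lab's real API.

```python
# Minimal sketch of the extraction loop. The endpoint, model name, and
# response schema below are hypothetical stand-ins, not a real lab API.
import requests

TEACHER_URL = "https://api.example-lab.com/v1/completions"  # hypothetical
API_KEY = "sk-attacker-credential"  # an ordinary paid API key

def query_teacher(prompt: str) -> str:
    """Send one prompt to the proprietary 'teacher' model, return its reply."""
    resp = requests.post(
        TEACHER_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "frontier-large", "prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["output"]

def build_distillation_set(prompts: list[str]) -> list[dict]:
    """Collect (prompt, completion) pairs for fine-tuning a 'student' model.

    The teacher's behavior transfers through its outputs; the attacker never
    needs the weights, architecture, or training data.
    """
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

# A real campaign would sample millions of prompts chosen to cover the
# teacher's capability surface: coding, reasoning, domain knowledge.
dataset = build_distillation_set([
    "Explain TCP slow start.",
    "Prove that the square root of 2 is irrational.",
])
```

The resulting dataset becomes supervised fine-tuning data for the student model, which is why the attack scales with nothing more than API access and patience.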
Chinese AI companies in particular have been identified as conducting this kind of extraction against U.S. frontier models, hence the national security framing.
The Frontier Model Forum
The Frontier Model Forum is an industry body founded in 2023 by Anthropic, Google, Microsoft, and OpenAI to coordinate on AI safety and security. Previous work has focused on shared safety research and government engagement. The new adversarial distillation initiative is a significant expansion of scope: it is the first time the Forum has been used for coordinated operational security against a specific identified threat actor category.
The intelligence-sharing component is notable because it requires the companies to disclose attack patterns, detection methods, and potentially the identities of accounts or organizations they have identified as conducting extraction attacks. Sharing that information with commercial rivals requires a level of trust that does not exist in most competitive industries.
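The Forum has not published its exchange format, but a minimal sketch, assuming a record loosely modeled on existing threat-intelligence schemas such as STIX, shows how labs could correlate bad actors without handing rivals raw customer data. Every field below is an assumption for illustration.

```python
# Hypothetical shared-indicator record; the Forum's actual schema, if one
# exists, has not been disclosed. All fields are illustrative assumptions.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class ExtractionIndicator:
    reporting_lab: str   # which member observed the activity
    account_hash: str    # salted hash, so members match bans without raw IDs
    pattern: str         # short description of the extraction behavior seen
    confidence: str      # "low" / "medium" / "high"
    first_seen: str      # ISO 8601 timestamp

def hash_account(account_id: str, shared_salt: str) -> str:
    """Hash an account identifier with a salt known only to Forum members,
    so a lab can ask 'have we seen this actor?' without exchanging the
    identifier itself."""
    return hashlib.sha256((shared_salt + account_id).encode()).hexdigest()

indicator = ExtractionIndicator(
    reporting_lab="lab-a",
    account_hash=hash_account("org-12345", "forum-shared-salt"),
    pattern="systematic topic sweep, ~40k prompts/day, near-uniform coverage",
    confidence="high",
    first_seen="2026-04-01T00:00:00Z",
)
print(json.dumps(asdict(indicator), indent=2))
```

Salted hashing is one plausible answer to the trust problem described above: a rival can confirm a match against its own customer list without learning anything about accounts it has never seen.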
Why They Are Cooperating
The three companies compete on almost every dimension that matters: enterprise customers, API pricing, benchmark rankings, researcher recruitment, government contracts. Their normal posture is to keep technical insights proprietary.
The cooperation on this specific issue reflects a straightforward calculation: adversarial distillation is an attack on all of them equally, and defending against it alone is less effective than sharing detection intelligence. If OpenAI identifies an account pattern consistent with extraction attacks and bans it, the operator behind that account can immediately sign up at Anthropic or Google. Shared intelligence closes that gap.
The national security framing — explicitly invoking Chinese AI companies as the actors — also creates a different risk calculus. Cooperating with competitors to defend national security interests is more defensible, legally and reputationally, than the same cooperation for purely commercial reasons.
The Technical Challenge
Model extraction attacks are hard to detect because legitimate and malicious API usage look similar at the query level. A researcher probing a model's capabilities and an adversary systematically extracting them both send large volumes of API requests covering diverse topics.
Detection typically relies on:
Query pattern analysis — extraction attacks often follow systematic coverage patterns (broad topic sampling, repeated edge cases) that differ from organic usage
Output correlation — if a new external model's outputs closely mirror a proprietary model's on specific inputs, that's evidence of distillation
Sharing these detection signals across labs means each company benefits from attack patterns the others have observed, rather than each building detection capability from scratch.
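Neither signal's production implementation is public, but toy versions convey the intuition. In the sketch below, the topic labels, the token-overlap metric, and the interpretation are illustrative assumptions, not any lab's actual detector.

```python
# Toy versions of both detection signals. Topic labels and metrics are
# illustrative assumptions, not a real lab's detection pipeline.
import math
from collections import Counter

def topic_entropy(query_topics: list[str]) -> float:
    """Signal 1, query pattern analysis: Shannon entropy (bits) of an
    account's topic distribution. Organic users cluster on a few topics;
    systematic sweeps push entropy toward the maximum."""
    counts = Counter(query_topics)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def output_similarity(a: str, b: str) -> float:
    """Signal 2, output correlation: crude Jaccard overlap of token sets.
    A real comparison would use embeddings across many probe inputs."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

organic = ["python", "python", "sql", "python", "regex", "sql"]
sweep = ["law", "chemistry", "rust", "poetry", "biology", "finance"]

print(f"organic entropy: {topic_entropy(organic):.2f} bits")  # ~1.46
print(f"sweep entropy:   {topic_entropy(sweep):.2f} bits")    # 2.58, max for 6 queries
print(f"overlap: {output_similarity('the cat sat', 'the cat slept'):.2f}")  # 0.50
```

In practice each signal alone is noisy, which is exactly why pooling observed attack patterns across three labs is worth the competitive discomfort.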
Impact
For AI companies: Any organization with a valuable proprietary model — not just the three labs in this initiative — should treat adversarial distillation as a real threat. The Forum's initiative will likely produce standards or best practices that smaller AI companies can adopt.
For China's AI industry: Chinese frontier labs (Baidu, Alibaba DAMO Academy, DeepSeek, and others) have published models that benchmark impressively close to U.S. frontier models. Whether that proximity reflects genuine independent capability or extraction-assisted development is contested. The Forum's initiative will make the extraction path harder.
For AI governance: This is the first instance of major AI labs coordinating operationally on a security threat rather than just on policy positions or safety research. It establishes a precedent for the Forum as an operational body, not just an advisory one.
For the broader technology industry: The initiative is the AI-specific instance of a wider pattern: U.S. technology companies treating the protection of proprietary AI capability as an extension of national security. That framing has historically led to export controls, investment restrictions, and government coordination, all of which are already present in AI but likely to intensify.
What to Watch
The Forum has not disclosed the specific technical measures or intelligence-sharing protocols being implemented. Watch for a more detailed public description of the initiative — either through a Forum publication or through regulatory filings that may require disclosure of coordinated competitor activities.
Also watch for the Chinese government's response. China has consistently framed U.S. AI restrictions as protectionism and has its own AI security concerns about U.S. companies operating in China. The Forum's initiative is likely to be characterized in Chinese state media as evidence of a technology decoupling strategy.
Hector Herrera covers AI security and geopolitics for NexChron.
Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.