
Goldman Sachs Cuts Hong Kong Staff Access to Anthropic's Claude Amid U.S.-China AI Tensions

Goldman Sachs has blocked Hong Kong employees from using Anthropic's Claude — both directly and through internal AI platforms — citing its contractual arrangement with Anthropic, not local regulation.



By Hector Herrera | April 29, 2026

Goldman Sachs has blocked its Hong Kong employees from using Anthropic's Claude — cutting off access both to the AI directly and through internal platforms that run on it. The move signals how U.S. corporate AI policy, not just government regulation, is quietly redrawing the line between what workers on either side of the Pacific can do with AI tools.

What Happened

According to Bloomberg, Goldman Sachs enforced a strict interpretation of its contractual arrangement with Anthropic that restricts Claude's use to certain geographies. Hong Kong employees lost access to Claude both as a standalone tool and through any internal Goldman AI platform that relies on Anthropic's models under the hood.

This is a corporate policy decision — not a Hong Kong or Chinese regulatory mandate. That distinction matters: Goldman isn't responding to local law. It's applying a U.S.-side contract boundary across its global footprint.

Context

The timing is loaded. The block arrives weeks before a planned Trump-Xi summit in which AI and data security are expected to be central agenda items. U.S. export controls have already restricted sales of advanced semiconductors to China. AI model access is the next frontier.

Hong Kong occupies a complicated position in this landscape. Legally distinct from mainland China, it operates under a different regulatory framework — but U.S. companies have grown increasingly cautious about treating it as a fully separate jurisdiction, especially for sensitive technology. Goldman's move reflects that caution hardening into policy.

Anthropic, for its part, is a U.S. AI safety company that has received major investment from Google and works closely with U.S. government agencies. Its contractual terms with enterprise clients are not public, but geographic restrictions of this kind are increasingly common in enterprise AI deals.

The Specifics

  • Who is affected: Goldman Sachs employees based in Hong Kong
  • What they lost access to: Anthropic's Claude, both direct access and internal Goldman platforms built on Claude
  • Why: Goldman's contractual interpretation of its Anthropic agreement — not Hong Kong law or Chinese regulation
  • When: Late April 2026, per Bloomberg reporting
  • Broader context: Weeks before a scheduled U.S.-China summit where AI policy is a stated agenda item

What This Means

For Goldman employees in Hong Kong, this is an immediate productivity hit. AI tools embedded in internal workflows don't disappear quietly — their removal creates workarounds, friction, and questions about what comes next.

For the broader financial services sector, this is a preview of how U.S.-China AI tensions will play out at the enterprise level — not through government mandates, but through contract language and corporate risk management. Banks and large firms operating across both markets are now implicitly on notice: their AI vendor agreements may contain geographic tripwires they haven't fully mapped.

Three implications for enterprise AI buyers:

  • Read your contracts. Geographic restrictions in AI vendor agreements are real and enforceable. Know where your tools can and cannot be used before you build workflows around them.
  • Dual-track AI stacks are coming. Large multinationals with operations in both U.S.-aligned and China-adjacent markets may need separate AI vendor arrangements for different geographies.
  • Hong Kong's "middle ground" status is eroding. U.S. firms are treating Hong Kong with increasing caution, collapsing the distinction that made it a useful bridge market.

For Anthropic, the episode surfaces a tension in its enterprise expansion. Growing revenue requires serving global clients. But its positioning as a U.S. AI safety company — and its government relationships — may constrain where its tools can go.

What to Watch

The Trump-Xi summit is the near-term catalyst. Any formal AI data agreement — or breakdown of talks — will shape how U.S. AI companies write geography clauses going forward. Watch also for whether other U.S. AI vendors (OpenAI, Google DeepMind) face similar contract pressure from their large financial-sector clients with Asian operations.


Hector Herrera covers AI in business and finance at NexChron. This article is based on Bloomberg reporting.

Key Takeaways

  • Goldman Sachs has blocked Hong Kong employees from using Anthropic's Claude, both directly and through internal platforms built on it.
  • The restriction stems from Goldman's interpretation of its contract with Anthropic — not Hong Kong law or Chinese regulation.
  • The move comes weeks before a planned Trump-Xi summit where AI policy is a stated agenda item.
  • Enterprise AI buyers should map the geographic restrictions in their vendor agreements before building workflows around AI tools.


Written by Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.

