New York Governor Signs Overhauled RAISE Act, Setting Frontier AI Transparency Rules Effective 2027

Governor Hochul signed New York's RAISE Act on March 27 — the nation's most detailed frontier AI transparency law. Companies training above 10²⁶ FLOPs face pre-deployment reports, 72-hour incident notification, and a new DFS oversight office. It directly collides with the White House.


By Hector Herrera | April 17, 2026

Governor Kathy Hochul signed the final chapter amendment to New York's Responsible AI Safety and Education (RAISE) Act on March 27, making New York the first state with a comprehensive transparency regime specifically targeting frontier AI models. The law takes effect January 1, 2027, and it is already on a collision course with the White House.

The RAISE Act is not a ban, a liability rule, or an AI-specific consumer protection statute. It is a transparency and incident-reporting mandate aimed squarely at the companies training the most powerful models — which means, in practice, OpenAI, Anthropic, Google DeepMind, and Meta.

Context

New York's legislature has debated AI regulation since 2023, with earlier RAISE Act versions stalling over concerns about regulatory overreach and chilling effects on AI development. The version Hochul signed reflects substantial revisions: tighter thresholds, clearer scope, and an enforcement home inside the Department of Financial Services — an agency that already has a track record of aggressive tech oversight. The final bill passed with bipartisan support in Albany, though no major AI lab publicly endorsed it.

Meanwhile, the Trump Administration released a National Policy Framework for Artificial Intelligence on March 20 — one week before Hochul's signature — explicitly recommending that Congress preempt state AI laws. The RAISE Act is precisely the kind of law that framework targets.

What the Law Requires

The RAISE Act applies to companies with $500 million or more in annual revenue that train AI models using more than 10²⁶ FLOPs of compute — a threshold that captures frontier model development (one FLOP is a single floating-point operation; 10²⁶ FLOPs is roughly ten times the compute used to train GPT-4).

Pre-deployment transparency reports. Before releasing a covered model, companies must publish a report disclosing training data sources, capability evaluations, known limitations, and the safety testing performed. The report does not require disclosure of trade secrets, but it must be substantive enough that the Attorney General can assess whether the company's claims are accurate.

Incident reporting. If a covered model is involved in a significant safety incident — the law defines this to include serious harms caused by model outputs — the company must notify the New York Attorney General and the Department of Homeland Security within 72 hours. This mirrors the incident notification timelines in financial regulation and critical infrastructure law.

New oversight office. The Department of Financial Services will stand up a dedicated AI oversight office to receive reports, conduct reviews, and enforce the statute. DFS has existing subpoena power, examination authority, and a history of imposing meaningful fines — unlike many newly created state tech oversight bodies.

Who This Hits

Under the current compute threshold, the law applies to a small number of companies: OpenAI, Anthropic, Google DeepMind, Meta, and possibly xAI (Elon Musk's AI company). Microsoft, Amazon, and others that fine-tune or deploy frontier models without training them from scratch at this scale would likely not be covered by the training-compute trigger.

That may change. The statute gives the DFS AI oversight office authority to adjust thresholds via rulemaking, which means the effective scope of the law could expand as compute prices fall and more companies reach frontier-level training runs.

The Federal Conflict

The Trump White House's March 20 AI framework explicitly targets laws like the RAISE Act, recommending that Congress pass legislation preempting state AI regulations it determines constitute "undue burdens" on development or interstate commerce. The framework does not propose a federal alternative transparency regime — it calls for governing AI through existing agencies rather than new rules.

This creates a clear legal and political fight. New York's attorneys are likely already stress-testing the RAISE Act's preemption defenses. The federal government's strongest preemption argument would come through Commerce Clause authority or an explicit congressional statute; absent that statute, New York's law stands.

States like California, Colorado, and Texas are watching. New York's law gives legislators in those states a tested model to adopt or adapt.

What to Watch

The RAISE Act's transparency report requirement is the provision most likely to produce real public information. If Anthropic must publish a substantive pre-deployment report for a future Mythos or Opus 5 release — covering training data sourcing and safety evaluations — that would be materially more disclosure than any frontier lab currently provides voluntarily.

The 72-hour incident notification requirement is the provision most likely to generate enforcement actions, because the statute defines a significant safety incident broadly and real-world AI incidents are rarely clean-cut. How DFS interprets harm in its first round of enforcement guidance will set the template for every other state that follows.


Source: Davis Wright Tremaine, April 2026

Key Takeaways

  • The RAISE Act takes effect January 1, 2027, and applies to companies with $500 million or more in annual revenue that train models above 10²⁶ FLOPs.
  • Covered companies must publish pre-deployment transparency reports and notify regulators of significant safety incidents within 72 hours.
  • A new AI oversight office inside the Department of Financial Services will enforce the law, with authority to adjust thresholds via rulemaking.
  • The law conflicts with the White House's March 20 framework, which recommends that Congress preempt state AI regulation.

Written by

Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.
