Government & Policy | 4 min read

EU Proposes 16-Month Delay to AI Act High-Risk Obligations Under Digital Omnibus Package

The European Commission proposes pushing AI Act high-risk obligations from August 2026 to December 2027 — a significant win for industry lobbies and a blow to EU governance ambitions.



By Hector Herrera | April 15, 2026 | Government

The European Commission's Digital Omnibus package proposes pushing the AI Act's high-risk obligations back 16 months, from August 2026 to December 2027. The delay affects AI embedded in regulated products across healthcare, critical infrastructure, and employment sectors. If adopted, it would be the most significant rollback of the EU's AI governance timeline since the Act was passed, and a major win for industry lobbies that have argued the compliance burden arrives too heavy, too fast.

Critics frame it differently: Europe is retreating from the global AI governance role it spent three years claiming.

What the AI Act's High-Risk Provisions Cover

The EU AI Act categorizes AI systems by risk level. "High-risk" is the most consequential category short of outright prohibition. Systems in this category face mandatory requirements for:

  • Risk management systems — ongoing documentation of risks the AI system poses
  • Training data governance — documentation of what data was used and how it was managed
  • Human oversight — technical measures ensuring humans can monitor, understand, and intervene in AI decisions
  • Transparency — clear disclosure to users that they are interacting with an AI system
  • Accuracy and robustness — performance standards and testing requirements

High-risk applications include: AI used in hiring and employment decisions, AI embedded in medical devices, AI systems used for critical infrastructure management, and AI used in educational assessment.

According to the EU AI Act Monitor, the Digital Omnibus package pushes all of these requirements to December 2027, giving affected companies an additional 16 months to build compliance infrastructure.

Why the Delay Is Happening

The stated rationale is competitiveness. European Commission officials have cited concerns that the AI Act's compliance burden puts European companies at a disadvantage versus US and Chinese competitors not subject to equivalent regulation. The argument: if European healthcare companies face 12 months of compliance work before deploying AI diagnostic tools, while their American counterparts face no equivalent requirement, the result is slower European adoption and competitive erosion.

The industry pressure behind that argument is real. Healthcare companies, insurance firms, and HR technology vendors have lobbied heavily against the high-risk framework, arguing that the documentation and oversight requirements add cost without proportionate safety benefit.

What the critics say: The claim that EU regulation creates competitive disadvantage has a logical flaw. If the high-risk requirements are genuinely good governance (and there are strong arguments that they are), then the disadvantage is temporary: US and Chinese competitors exempt from those requirements are also exempt from the protection they provide. The real question is whether building compliance discipline first gives European companies a long-term advantage, not just a short-term cost.

More pointed critics argue that the delay reflects regulatory capture: industry lobbies successfully pressure a delay, then use the extended runway to lobby for further weakening before December 2027 arrives. The EU AI Act has faced this dynamic at multiple stages of its development.

What Gets Delayed — and What Does Not

The delay is specifically to high-risk AI obligations. Several other AI Act provisions remain on their original timelines:

  • GPAI (General Purpose AI) rules — which govern foundation models and their providers — remain on schedule
  • Prohibited AI practices — outright bans on social scoring, real-time biometric surveillance in public spaces, and manipulative AI — remain in force
  • Transparency requirements for AI-generated content (deepfake disclosure, chatbot identification) are not affected

The practical result: foundation model providers like Anthropic, OpenAI, and Google face unchanged compliance timelines for GPAI rules. The companies getting relief are those deploying AI in products sold into regulated sectors — medical device manufacturers, HR software vendors, critical infrastructure operators.

Implications for Businesses

If you sell AI-embedded products into EU regulated markets: The 16-month extension gives you more runway, but the December 2027 deadline is still a hard target. Companies that use the extension to defer compliance work entirely — rather than pace it — will face the same crunch, just later.

If you are a compliance or risk team: The delay is not an exemption. Risk management system documentation, training data records, and human oversight architecture all still need to be built. Starting now puts you ahead of the companies that will scramble in 2027.

If you are evaluating EU market entry: The delay reduces near-term compliance costs, but the December 2027 deadline should be factored into product roadmaps with the same weight as any other regulatory deadline.

What to Watch

The Digital Omnibus package must clear the European Parliament and Council — it is a proposal, not finalized law. Watch for Parliament amendments that push back on the delay, particularly from MEPs who have been the AI Act's strongest advocates. The period between now and adoption is also a window for further industry lobbying to weaken the December 2027 requirements themselves. The delay is step one; the content of what companies actually must do by December 2027 is still being negotiated.


Hector Herrera is the founder of Hex AI Systems and editor of NexChron.

Key Takeaways

  • The Digital Omnibus package would delay the AI Act's high-risk obligations by 16 months, from August 2026 to December 2027.
  • GPAI rules, prohibited-practice bans, and transparency requirements for AI-generated content remain on their original timelines.
  • The delay is a proposal, not finalized law; it must still clear the European Parliament and Council, and further lobbying on the requirements themselves is likely before adoption.
  • Compliance teams should treat December 2027 as a hard deadline and keep building risk management, training data, and human oversight documentation now rather than deferring the work.