EU AI Act Deadlines Are Slipping — What That Means for Your Compliance Roadmap
The European Union is actively considering pushing key AI Act compliance deadlines from 2026 into 2027 and 2028, according to Eversheds Sutherland's April 2026 Global AI Regulatory Bulletin. For multinationals that built compliance programs around a June 2026 hard deadline, this creates immediate uncertainty: the obligations are real, but the timeline for enforcement may be materially different from what your legal team planned around.
Context
The EU AI Act — the world's first comprehensive binding legal framework for artificial intelligence — entered into force in August 2024. It established a phased implementation schedule: prohibited AI systems banned by February 2025, general-purpose AI model rules applying from August 2025, and high-risk system requirements kicking in from August 2026.
The problem is the scale of the task. Compliance requires classifying every AI system an organization uses or deploys by risk level, implementing conformity assessments (audits confirming a system meets the Act's technical standards) for high-risk systems, registering those systems in an EU database, and establishing governance structures to manage ongoing obligations. For large enterprises operating dozens or hundreds of AI systems across EU member states, this is not a checkbox exercise.
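To make the inventory-and-classification step concrete, here is a minimal sketch in Python of what a per-system compliance record might track. This is illustrative only: the class names, fields, and the simplified high-risk area list are assumptions for the example (the Act's actual high-risk categories are defined in Annex III and are more granular than shown here).

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations only
    MINIMAL = "minimal"

# Simplified stand-in for the Act's Annex III high-risk use-case areas
HIGH_RISK_AREAS = {"biometrics", "education", "employment",
                   "law_enforcement", "essential_services"}

@dataclass
class AISystemRecord:
    name: str
    use_case_area: str
    conformity_assessed: bool = False   # third-party audit done?
    registered_in_eu_db: bool = False   # entered in the EU database?

    def risk_tier(self) -> RiskTier:
        # Real classification requires legal analysis; this keys off
        # the use-case area alone as a first-pass triage.
        if self.use_case_area in HIGH_RISK_AREAS:
            return RiskTier.HIGH
        return RiskTier.MINIMAL

    def outstanding_obligations(self) -> list[str]:
        gaps = []
        if self.risk_tier() is RiskTier.HIGH:
            if not self.conformity_assessed:
                gaps.append("conformity assessment")
            if not self.registered_in_eu_db:
                gaps.append("EU database registration")
        return gaps
```

A record like this makes the "not a checkbox exercise" point visible: every system in the portfolio carries its own open obligations, and the roll-up across hundreds of systems is what compliance teams are actually racing the deadline against.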
EU institutions have been receiving consistent feedback from industry: the implementation timeline is structurally compressed relative to the compliance infrastructure that actually needs to exist. Notified bodies — the third-party auditors accredited to certify high-risk AI systems — are still being established in many member states. You cannot certify a system when the certifying infrastructure doesn't yet exist at scale.
What's Actually Being Considered
The April bulletin identifies two types of timeline flexibility under discussion:
Extended transition periods for specific categories of high-risk AI systems, particularly those in healthcare, education, and law enforcement — areas where the technical standards for conformity assessment are still being finalized by European standards bodies (CEN-CENELEC).
Delayed enforcement — distinct from extended compliance deadlines. The EU could hold the legal deadline firm while signaling that enforcement actions will not be initiated against companies demonstrating good-faith compliance efforts until 2027 or 2028. This is the more likely path, as it preserves the legislative framework while creating practical relief.
No formal revision to the Act has been announced. For now this is a signals phase: EU officials speaking informally, industry working groups receiving softer messaging, and regulatory bulletins flagging institutional recognition that the timelines are under pressure.
What This Means for Compliance Teams
If you built your roadmap around June 2026: You are in the most uncertain position. The deadline may slip, but there's no guarantee — and starting over with a new target date introduces its own risks. The pragmatic path is to continue building toward your planned milestones while documenting that progress formally. Good-faith effort is the shield most likely to matter in an enforcement-delayed environment.
If you haven't started yet: A deadline slip is not a green light. The substantive work — AI inventory, risk classification, vendor due diligence, governance framework — takes 12 to 24 months for organizations of any scale. Companies that wait for a formal deadline revision will be competing for a sharply constrained supply of AI compliance consultants, legal advisors, and notified body capacity.
For general-purpose AI (GPAI) model providers: The August 2025 obligations for GPAI already apply. The slippage discussion is focused on high-risk system requirements, not the GPAI tier. If you develop or fine-tune large language models for EU markets, your compliance clock is already running.
The Broader Pattern
This is not unique to the EU. The UK's AI governance framework is a voluntary code of practice, not legislation. The U.S. AI executive orders have been partially rescinded and replaced multiple times. China's AI regulations are advancing in a direction that prioritizes state oversight over enterprise compliance.
What companies are navigating in 2026 is a global AI regulatory environment that is simultaneously accelerating in scope and decelerating in enforcement readiness. The gap between what regulators want to require and what the compliance ecosystem can actually support is widening.
The EU AI Act is still the most consequential AI regulatory framework in force. A deadline slip doesn't change that. It changes the tempo, not the direction.
What to Watch
Formal guidance from the European AI Office — established in 2024 to oversee the Act's implementation — is expected in Q2 2026. Any official communication about timeline flexibility will move quickly from a signals conversation to a legal reality. Watch also for member state enforcement agency statements, particularly from Germany and France, which have the most active AI regulatory infrastructure inside the EU.
By Hector Herrera | NexChron | April 29, 2026