In Depth

The EU AI Act, which entered into force in August 2024, is the world's first comprehensive AI regulation. It classifies AI systems into risk tiers: unacceptable risk (banned, including social scoring and certain biometric surveillance), high risk (requiring conformity assessments, documentation, and human oversight), limited risk (requiring transparency measures), and minimal risk (no specific requirements).
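The four-tier structure above is essentially a classification scheme, so it can be sketched as a small lookup. The tier names come from the Act; the example use cases and the `classify` helper are purely illustrative assumptions here — the Act's actual scoping rules (notably Annex III) are far more detailed than a dictionary lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # conformity assessment, oversight, docs
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no specific requirements

# Hypothetical mapping of example use cases to tiers, for illustration only.
EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative tier for a use case, defaulting to MINIMAL."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
```

In practice, tier assignment depends on the system's intended purpose and deployment context, not a simple label, but the tiered shape of the obligations follows this pattern.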

High-risk AI systems, which include those used in hiring, credit scoring, law enforcement, healthcare, and critical infrastructure, must meet extensive requirements: risk management systems, data governance standards, technical documentation, record-keeping, transparency measures, human oversight capabilities, and appropriate levels of accuracy, robustness, and cybersecurity. General-purpose AI models (including LLMs) face additional transparency requirements.
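The obligation areas listed above (roughly Articles 9 to 15 of the Act) lend themselves to a compliance checklist. A minimal sketch, assuming a hypothetical set of evidence labels — the names and the `missing_obligations` helper are illustrative, not terminology from the Act itself:

```python
# Obligation areas for high-risk systems, as summarized in the text above.
HIGH_RISK_OBLIGATIONS = [
    "risk_management_system",
    "data_governance",
    "technical_documentation",
    "record_keeping",
    "transparency",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
]

def missing_obligations(evidenced: set[str]) -> list[str]:
    """Return obligation areas not yet evidenced, in checklist order."""
    return [o for o in HIGH_RISK_OBLIGATIONS if o not in evidenced]
```

Real conformity assessments involve detailed documentation and, in some cases, notified bodies; the point of the sketch is only that every area must be covered before a high-risk system is placed on the market.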

The EU AI Act has global impact because companies that place AI systems on the EU market, or whose systems affect people in the EU, must comply regardless of where they are headquartered, similar to the 'Brussels Effect' of GDPR. Many organizations are adopting EU AI Act compliance as their global standard rather than maintaining different practices for different regions. For AI vendors, compliance readiness is becoming a competitive differentiator and a prerequisite for enterprise sales in European markets.