The EU AI Act is the world's first comprehensive legal framework for artificial intelligence, adopted by the European Parliament in March 2024. It establishes rules for AI development and use based on a risk-classification system, with higher-risk applications facing stricter requirements. Even if your business is outside the EU, this law likely affects you if you serve European customers or if the output of your AI systems is used by people in the EU.
The risk-based framework:
Unacceptable risk (banned): AI systems that pose a clear threat to safety or rights are prohibited outright. This includes social scoring systems by governments, real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions), AI that manipulates behavior to cause harm, and systems that exploit vulnerable groups.
High risk (heavily regulated): AI used in critical applications must meet strict requirements including risk assessments, data quality standards, human oversight, transparency, and documentation. High-risk categories include:
- AI in hiring and employment decisions
- Credit scoring and insurance assessment
- Medical devices and diagnostic AI
- Law enforcement and judicial systems
- Education and vocational training assessment
- Critical infrastructure management
Limited risk (transparency obligations): Systems like chatbots must clearly disclose that users are interacting with an AI system, and AI-generated or manipulated content (deepfakes) must be labeled as such.
Minimal risk (mostly unregulated): AI in video games, spam filters, inventory management, and similar low-stakes applications can operate freely with no specific requirements.
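To make the tiers concrete, here is a minimal sketch of how a company might record them in an internal AI inventory. The class, field, and system names are hypothetical illustrations, not anything prescribed by the Act.

```python
from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    """The four tiers of the EU AI Act's risk-based framework."""
    UNACCEPTABLE = "unacceptable"   # prohibited outright
    HIGH = "high"                   # heavily regulated
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # mostly unregulated

@dataclass
class AISystemRecord:
    """One entry in an internal AI inventory (illustrative fields only)."""
    name: str
    use_case: str
    tier: RiskTier
    notes: str = ""

# Example entries mirroring the categories described above.
inventory = [
    AISystemRecord("resume-screener", "hiring and employment decisions", RiskTier.HIGH),
    AISystemRecord("support-chatbot", "customer service chatbot", RiskTier.LIMITED),
    AISystemRecord("spam-filter", "email spam filtering", RiskTier.MINIMAL),
]
```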
Key requirements for high-risk AI:
- Risk management system: Ongoing identification and mitigation of risks throughout the AI lifecycle
- Data governance: Training data must be relevant, representative, and free of bias to the extent possible
- Technical documentation: Detailed records of design, capabilities, limitations, and performance
- Record-keeping: Automatic logging of operations for traceability (a minimal logging sketch follows this list)
- Transparency: Clear information for users about the AI's capabilities and limitations
- Human oversight: Meaningful human control over the system's operations
- Accuracy and robustness: Systems must perform reliably and resist manipulation
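The record-keeping and human-oversight items lend themselves to a simple illustration. The sketch below is one hypothetical way to log each AI-assisted decision for traceability; the Act does not prescribe this format, and the function and field names are assumptions for the example.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: the Act requires automatic logging for traceability,
# but the record format and field names below are assumptions.
logger = logging.getLogger("ai_audit_trail")
logging.basicConfig(level=logging.INFO)

def log_decision(system_name: str, input_summary: str, output_summary: str,
                 human_reviewer: str | None = None) -> None:
    """Write one traceability record for a single AI-assisted decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system_name,
        "input_summary": input_summary,
        "output_summary": output_summary,
        "human_reviewer": human_reviewer,  # supports the human-oversight requirement
    }
    logger.info(json.dumps(record))

# Example: a credit-scoring model's recommendation reviewed by a person.
log_decision("credit-scoring-v2", "applicant features hash=ab12",
             "score=640, recommend decline", human_reviewer="analyst-17")
```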
Penalties: The most serious violations (prohibited practices) can draw fines of up to 35 million euros or 7% of global annual turnover, whichever is higher; lower maximums apply to other breaches. These penalties are deliberately severe to ensure compliance.
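For a sense of scale, the top tier works out to whichever figure is larger. A quick sketch of the arithmetic:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the top penalty tier: the greater of EUR 35M or 7% of turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A company with EUR 2 billion in global annual turnover:
# 7% of 2,000,000,000 = 140,000,000, which exceeds 35,000,000.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```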
Timeline: The Act entered into force in August 2024, with requirements phasing in through 2027. Banned practices apply from February 2025, general-purpose AI model rules from August 2025, and most high-risk requirements from August 2026.
Impact on US and global companies: Any company that places AI systems on the EU market or whose AI outputs are used in the EU must comply. This includes US companies serving European customers through AI-powered products, which in practice covers most major tech companies.
What to do now: Inventory your AI systems, classify them by risk level, identify gaps against the Act's requirements, and begin building compliance processes. Companies that start early will have a competitive advantage over those that scramble at the deadline.
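One hedged way to operationalize the gap-analysis step: the snippet below compares a system's existing controls against the high-risk requirements listed earlier. The requirement keys and function name are illustrative shorthand, not official terminology.

```python
# Hypothetical gap check against the high-risk requirements described above.
HIGH_RISK_REQUIREMENTS = {
    "risk_management", "data_governance", "technical_documentation",
    "record_keeping", "transparency", "human_oversight", "accuracy_robustness",
}

def compliance_gaps(risk_tier: str, controls_in_place: set[str]) -> set[str]:
    """Return the requirements a high-risk system has not yet addressed."""
    if risk_tier != "high":
        return set()  # other tiers carry lighter or no obligations
    return HIGH_RISK_REQUIREMENTS - controls_in_place

# Example: a resume-screening system with documentation and logging in place.
print(sorted(compliance_gaps("high", {"technical_documentation", "record_keeping"})))
```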