Glossary · Regulatory

EU Artificial Intelligence Act (EU AI Act).

The European Union's risk-based regulation of artificial intelligence systems and general-purpose AI models, with extraterritorial reach over providers and deployers serving the EU market.

What it is.

The EU Artificial Intelligence Act, Regulation (EU) 2024/1689, was adopted by the European Parliament on March 13, 2024 and approved by the Council on May 21, 2024; it was signed on June 13, 2024, published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024. The regulation is the first comprehensive horizontal AI regulation in any major jurisdiction. It applies risk-based classification to AI systems and separate rules to general-purpose AI models, with extraterritorial reach under Article 2 over providers and deployers established outside the EU where the output of the AI system is used in the EU.

Risk classification operates in four tiers. Prohibited practices under Article 5 include subliminal manipulation, exploitation of vulnerabilities, social scoring, real-time remote biometric identification in publicly accessible spaces for law-enforcement purposes (subject to narrow exceptions), emotion recognition in workplaces and educational institutions, biometric categorisation inferring sensitive attributes, and predictive policing based solely on profiling. High-risk systems under Annex III include biometric identification, critical infrastructure, education and vocational training, employment, access to essential services, law enforcement, migration and asylum, and the administration of justice. High-risk systems must meet requirements on risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity (Articles 8-15), plus post-market monitoring (Article 72). Limited-risk systems, such as chatbots and deepfake generators, carry transparency obligations under Article 50; minimal-risk systems face no new obligations under the Act.
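The four-tier structure above can be sketched as a simple lookup. This is an illustrative sketch only: the tier names follow the Act, but the enum, the example use cases, and the helper function are hypothetical, and real classification requires legal analysis, not a dictionary.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5 practices: banned outright
    HIGH = "high"               # Annex III uses: Articles 8-15 requirements
    LIMITED = "limited"         # Article 50 transparency duties (e.g. chatbots)
    MINIMAL = "minimal"         # no new obligations under the Act

# Illustrative, non-exhaustive mapping of hypothetical use cases to tiers:
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "cv_screening_for_hiring": RiskTier.HIGH,   # Annex III: employment
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier:
    # Defaulting to MINIMAL is purely for illustration; the Act's actual
    # default depends on Article 5, Annex III, and Article 50 analysis.
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

print(tier_for("cv_screening_for_hiring").value)  # -> high
```

The point of the sketch is the shape of the regime, not the mapping itself: one banned tier, one heavily regulated tier, one transparency tier, and a residual tier with no new duties.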

General-purpose AI models face separate obligations under Chapter V. All GPAI providers must maintain technical documentation, publish a summary of training content, comply with EU copyright law including text-and-data-mining opt-outs, and provide information to downstream providers. GPAI models with systemic risk (defined by the 10^25 floating-point operations training-compute threshold under Article 51, subject to revision) face additional obligations on model evaluation, adversarial testing, systemic-risk assessment and mitigation, serious-incident reporting, and cybersecurity protection. The European AI Office within the European Commission supervises GPAI providers directly and works with national competent authorities on AI system supervision.
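As a rough illustration of how the Article 51 threshold operates, the sketch below estimates training compute with the common 6 × parameters × tokens approximation from the scaling-law literature. That approximation, and the model sizes used, are assumptions for illustration; the Act itself counts cumulative training compute in floating-point operations, however measured.

```python
# Back-of-envelope check against the Article 51 systemic-risk presumption.
# Training compute is approximated as 6 * N * D (parameters * tokens), a
# common scaling-law heuristic -- an assumption, not the Act's methodology.
THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough FLOP estimate for one training run (6*N*D approximation)."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOP presumption."""
    return estimated_training_flops(params, tokens) >= THRESHOLD_FLOPS

# Hypothetical models, not any real provider's figures:
print(presumed_systemic_risk(7e9, 2e12))   # 7B params, 2T tokens: ~8.4e22, below
print(presumed_systemic_risk(4e11, 1e13))  # 400B params, 10T tokens: ~2.4e25, above
```

The arithmetic shows why the threshold currently catches only frontier-scale training runs: a mid-size model trained on trillions of tokens lands two orders of magnitude below 10^25.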

Cross-border implication.

For US AI providers (OpenAI, Anthropic, Google, Microsoft, Meta, Amazon Bedrock, plus mid-cap and startup providers), the EU AI Act reaches them directly when EU users access their models, when their outputs are used by EU deployers, or when their models are integrated into AI systems placed on the EU market. GPAI obligations apply at the model level regardless of EU establishment. High-risk system providers face conformity-assessment obligations and CE-marking-equivalent EU AI Act compliance documentation. The phased application means the first hard gates are February 2, 2025 (prohibitions and AI literacy obligations), August 2, 2025 (GPAI rules), and August 2, 2026 (most high-risk-system rules under Annex III; high-risk systems embedded in regulated products under Annex I follow on August 2, 2027).

For EU firms with US operations, the AI Act applies to EU-deployed AI systems and to outputs used in the EU regardless of where the underlying model was developed. The compliance build typically integrates with the firm's existing GDPR program, given the data-protection overlay and the convergence of data protection impact assessments (DPIAs) and fundamental rights impact assessments (FRIAs) on high-risk systems involving personal data.

Where this shows up on the GMA work.

The EU AI Act sits on the AI-product trajectory work in the Operators entering the US book (run in reverse, for US AI firms selling to EU buyers), on the AI hub work where present, on the Answers hub for transatlantic AI-product questions, and on the related GDPR entry. The presentation work covers how the firm names its AI-Act-readiness posture, its high-risk-system classification, its GPAI-with-systemic-risk status (where applicable), and its EU AI Office engagement on US- and EU-facing surfaces. The compliance program belongs with EU AI regulatory counsel and the firm's AI governance function.

Scope note.

Global Marketing Agency does not provide EU AI Act compliance program design, conformity-assessment work, FRIA preparation, GPAI codes-of-practice negotiation, or AI Office engagement. Those activities belong to EU AI regulatory counsel and the firm's AI governance function. GMA works on how the firm's AI-Act posture is presented, sequenced, and read on US- and EU-facing surfaces.

If a US AI firm is sitting on an EU AI Act readiness question, a GPAI obligation, or a high-risk-system classification.

Send the model card, the deployment posture, and the EU customer book. Response within one business day.

Start the conversation

Sources cited on this page: Future of Life Institute, EU AI Act explorer; Regulation (EU) 2024/1689, EU AI Act; European Commission, AI Office; European Commission, Regulatory framework on AI; Council of the EU, Artificial Intelligence; European Parliament, EU AI Act overview.