EU AI Act Compliance 2026: What GPAI Providers Must File by August
The August 2026 EU AI Act deadlines are real. The technical files, transparency reports, and incident docs GPAI providers actually have to ship.
The Deadlines That Matter
The EU AI Act entered into force in August 2024 with staged effective dates: prohibited practices from February 2025, General-Purpose AI (GPAI) provider obligations from August 2, 2025. By April 2026 most teams have absorbed the prohibited-practices and transparency provisions. The next big milestone is August 2, 2026, when the bulk of the Act applies and the Commission gains its enforcement powers, including fines, over GPAI providers whose models are placed on the EU market.
This piece walks through what GPAI providers actually have to file, with practical detail rather than treaty-language summaries.
Who Counts as a GPAI Provider
```mermaid
flowchart TD
    Q1{Develops a general-purpose<br/>AI model?} -->|Yes| Q2
    Q1 -->|No| Skip[Article 50 transparency<br/>still applies]
    Q2{Places it on the EU<br/>market or puts into service?} -->|Yes| GPAI[GPAI provider obligations]
    Q2 -->|No| OutOfScope[Out of scope]
```
A "general-purpose AI model" is one that can be applied to a wide range of distinct tasks. Foundation models, frontier LLMs, and large open-weights releases are all in scope. Smaller fine-tunes are also in scope if they substantially modify the base.
If you fine-tune an existing GPAI for a narrow use case, you are typically a downstream deployer rather than a GPAI provider. The distinction matters, because a different set of obligations applies.
The Four Pillars of GPAI Compliance
Pillar 1: Technical Documentation
Per Article 53 and Annex XI, providers must maintain technical documentation covering:
- Tasks the model can perform
- Acceptable-use policy
- Training, testing, validation procedures
- Architecture, modalities, license
- Energy consumption
- Training-data summary (published publicly using the Commission-provided template, per Article 53(1)(d))
The training-data summary is the lightning rod. The template asks for a "sufficiently detailed summary" of training data sources. Frontier providers have published high-level summaries; the EU AI Office is expected to publish stricter guidance through 2026.
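When assembling the technical file, a simple completeness check keeps drafts honest. A minimal sketch in Python; the field names below merely paraphrase the Annex XI headings above and are not official keys:

```python
# Hypothetical checklist: keys paraphrase Annex XI headings, not an official schema.
REQUIRED_FIELDS = {
    "tasks",                  # tasks the model can perform
    "acceptable_use",         # acceptable-use policy
    "training_process",       # training, testing, validation procedures
    "architecture",           # architecture, modalities, license
    "energy",                 # energy consumption
    "training_data_summary",  # public summary per Article 53(1)(d)
}

def missing_fields(technical_file: dict) -> set:
    """Return the sections that are absent or empty in a draft technical file."""
    return {f for f in REQUIRED_FIELDS if not technical_file.get(f)}

draft = {"tasks": "text generation", "architecture": "decoder-only transformer"}
print(sorted(missing_fields(draft)))
# ['acceptable_use', 'energy', 'training_data_summary', 'training_process']
```

Running a check like this in CI means a release can be blocked until every section of the file has content.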
Pillar 2: Information for Downstream Deployers
Providers must give downstream users enough information to comply with the Act themselves. In practice this means:
- A model card documenting capabilities and limitations
- Performance evaluations on representative tasks
- Known biases and risks
- Intended-use and excluded-use guidance
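The four items above can live as one structured record that renders into a deployer-facing sheet. A minimal sketch; the dict keys and example content are illustrative, not an official Annex XII schema:

```python
def deployer_sheet(card: dict) -> str:
    """Render the four required items as a plain-text information sheet.

    Section titles mirror the list above; keys are illustrative only."""
    sections = [
        ("Capabilities and limitations", card["capabilities"]),
        ("Evaluations", card["evaluations"]),
        ("Known biases and risks", card["risks"]),
        ("Intended and excluded uses", card["uses"]),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

sheet = deployer_sheet({
    "capabilities": "Text generation in 24 EU languages; no tool use.",
    "evaluations": "Internal benchmark results and safety evals attached.",
    "risks": "Weaker performance on low-resource languages.",
    "uses": "Intended: drafting, summarization. Excluded: legal advice.",
})
print(sheet)
```

Keeping the source of truth structured means the same record can feed the model card, the website, and the AI Office response pack.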
Pillar 3: Copyright Compliance
Providers must publish a training-data summary and maintain a policy to comply with EU copyright law, including honoring machine-readable text-and-data-mining opt-outs. The Commission's template walks through how to do this without revealing trade secrets.
Pillar 4: Open-Source Carve-Outs
Models released under a free and open-source license, with weights, architecture, and usage information publicly available, are exempt from the technical-documentation and downstream-information obligations. The copyright policy and training-data summary still apply, and the exemption disappears entirely for systemic-risk models.
Systemic-Risk Models
```mermaid
flowchart TD
    Train[Training compute] --> Check{>= 10^25 FLOPs?}
    Check -->|Yes| Sys[Systemic-risk presumption]
    Check -->|No| Reg[Regular GPAI obligations]
    Sys --> Notify[Must notify Commission]
    Sys --> Extra[Adversarial testing,<br/>incident reporting,<br/>cybersecurity safeguards]
```
Models trained with >= 10^25 FLOPs are presumed to have systemic risk. They get extra obligations:
- Adversarial testing (red-teaming)
- Incident reporting
- Cybersecurity protections
- Energy consumption tracking
The 10^25 FLOP threshold is a blunt proxy for capability, and the Commission can revise it as the state of the art moves. By 2026 most frontier models from OpenAI, Anthropic, Google, Meta, and the strongest Chinese open-weights releases sit above it.
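Whether a model trips the presumption can be estimated with the standard rule of thumb that dense-transformer training costs roughly 6 FLOPs per parameter per token. A back-of-the-envelope check in Python; the parameter and token counts below are hypothetical:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough compute estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51 presumption

# Hypothetical: a 400B-parameter model trained on 15T tokens
flops = training_flops(400e9, 15e12)
print(f"{flops:.2e}", flops >= SYSTEMIC_RISK_THRESHOLD)  # 3.60e+25 True
```

Teams near the line should log actual accelerator-hours rather than rely on the rule of thumb, since the presumption turns on cumulative training compute.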
What "Filing" Looks Like in Practice
There is no public filing for GPAI obligations; the documents are kept by the provider and shown to the AI Office on request. But the AI Office has signaled it will request documentation when it suspects non-compliance, and the Code of Practice that major providers signed in 2025 includes voluntary public-disclosure commitments.
The Code of Practice
The voluntary GPAI Code of Practice, finalized in 2025, is the de facto compliance standard. Signatories include OpenAI, Anthropic, Google, Microsoft, and Mistral. Meta notably declined to sign, as did some other open-weights-focused and Chinese providers.
Following the Code of Practice is the lowest-friction path to demonstrating compliance. The Code is organized into three chapters of prescriptive guidance: transparency, copyright, and safety and security (the last applying only to systemic-risk models).
Practical Steps for a Mid-Sized GPAI Provider
For teams shipping a GPAI in the EU in 2026:
- Adopt the Code of Practice
- Build the Annex XI technical file (template available from the AI Office)
- Write a training-data summary using the AI Office template
- Write a downstream-deployer information sheet
- Set up an incident-reporting workflow (serious-incident reports go to the AI Office)
- Pre-deployment red-teaming with documented results
- Publish a copyright-compliance policy
Most teams have stood this up over Q1-Q2 2026 to be ready for August.
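The incident-reporting step above can start as nothing more than a structured record that serializes cleanly. A minimal Python sketch; the field names are illustrative and not an official AI Office schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class SeriousIncidentReport:
    """Illustrative serious-incident record; not an official reporting schema."""
    model_name: str
    model_version: str
    description: str
    severity: str                  # e.g. "serious", "critical"
    corrective_actions: list
    occurred_at: str               # ISO 8601 timestamp of the incident
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def to_json(report: SeriousIncidentReport) -> str:
    """Serialize a report for the internal incident log or an AI Office response."""
    return json.dumps(asdict(report), indent=2)
```

Capturing incidents in a fixed shape from day one makes it straightforward to reformat them later into whatever template the AI Office ultimately prescribes.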
Downstream Deployer Obligations
If you fine-tune a GPAI for a narrow use case, you are a deployer. Your obligations are different:
- Use the model within the GPAI provider's stated intended use
- Comply with high-risk-AI obligations if your use case is high-risk (Annex III)
- Maintain logging and human oversight as your application requires
- Notify users when they are interacting with AI
Sources
- EU AI Act full text — https://eur-lex.europa.eu/eli/reg/2024/1689
- AI Office GPAI guidance — https://digital-strategy.ec.europa.eu
- Code of Practice — https://digital-strategy.ec.europa.eu/en/policies/ai-code-practice
- Commission training-data template — https://digital-strategy.ec.europa.eu
- EU AI Act compliance checker — https://artificialintelligenceact.eu