
European AI Act: enforcement kicking in from August 2025

Updated: 2026-05-03

2 August 2025 is a date we had been watching on compliance calendars for years. It is the day the EU AI Act, published in the Official Journal in July 2024 and in force since that August, begins applying its second block of obligations: the general-purpose model regime, the designation of national authorities, and the penalty framework. This post is a practical summary of what really changes for teams shipping AI systems in Europe, written from the operator’s chair, not the lawyer’s.

Key takeaways

  • The AI Act calendar has three dates: February 2025 (prohibited practices), August 2025 (general-purpose models and penalties), and August 2026 (high-risk systems).
  • Large foundation model providers must comply from August 2025 with technical documentation, copyright policy, and training data summary.
  • The systemic risk threshold is 10²⁵ floating-point training operations — it covers today’s frontier models from the major labs.
  • The highest fine tier is up to 7% of global annual turnover for prohibited practices; for incorrect information to authorities, up to 1%.
  • AESIA in Spain takes the national supervisory role; the European AI Office directly oversees providers of general-purpose models with systemic risk.

What takes effect on 2 August

The AI Act does not kick in all at once, and understanding this is important before any discussion. The calendar has three key dates:

  • February 2025: prohibited practices
  • August 2025: general-purpose models, governance, and penalties
  • August 2026: high-risk systems

What activates now are the Chapter V obligations on general-purpose models, the articles on governance, and the fines tied to non-compliance.

In practice this means that providers of large foundation models — OpenAI, Anthropic, Google, Mistral, Meta — must comply from this date with concrete obligations: technical documentation, copyright policy, training data summary, and notification to the Commission if the model crosses the systemic risk threshold. For new models published from August 2025, the obligation applies immediately. For models already on the market there is an adaptation period until August 2027.

What being a general-purpose model entails

The law defines a general-purpose model as one trained on a significant amount of data and able to perform a wide variety of tasks, suitable for downstream integration. The technical definition includes a quantitative threshold: models trained with more than 10²⁵ floating-point operations are presumed to carry systemic risk. This figure was chosen with GPT-4 as a reference and today covers the frontier models from the major labs.
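To make the threshold concrete, the widely used 6·N·D approximation (training compute ≈ 6 × parameter count × training tokens) gives a back-of-the-envelope check. A minimal sketch, with invented function names and illustrative figures — the legal test is actual training compute, not this heuristic:

```python
# The 1e25 FLOP figure is the Act's presumption threshold; the 6*N*D
# heuristic is a common community approximation, not part of the law.

SYSTEMIC_RISK_THRESHOLD = 1e25  # floating-point training operations

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate training compute as 6 * parameters * training tokens."""
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the estimate crosses the Act's presumption threshold."""
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70B-parameter model trained on 15T tokens lands at
# roughly 6.3e24 FLOPs, just under the threshold; a much larger run
# (say 1T parameters, 10T tokens) would cross it comfortably.
below = presumed_systemic_risk(70e9, 15e12)
above = presumed_systemic_risk(1e12, 10e12)
```

The point of the sketch is how close current frontier-scale training runs sit to the line: a single order of magnitude in either parameters or data flips the presumption.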

For most companies that use these models as customers rather than providers, the law imposes no new obligations on this date. What changes is that customers can require technical documentation and copyright policies from the provider, turning some standard API contracts into negotiable documents.

National authorities and the European office

Each member state must designate competent national authorities before 2 August. In Spain, AESIA (operating since 2023) takes this role with its headquarters in A Coruña. The European AI Office, inside the Commission, coordinates cross-border supervision and directly oversees providers of general-purpose models with systemic risk.

This two-layer structure is deliberate: national authorities deal with specific uses, the European office handles base models. With designations closed and offices staffed, there is now a clear channel to report to, consult, and request interpretive guidance.

[Image: European Parliament session in Strasbourg where the AI Act was debated. The August 2025 obligations now affect model providers and teams integrating AI in European products.]

Fines that actually hurt

The law sets three fine tiers, in each case whichever amount is higher:

  • Prohibited practices: up to 7% of global annual turnover or €35 M
  • General non-compliance: up to 3% of global annual turnover or €15 M
  • Incorrect information to authorities: up to 1% of global annual turnover or €7.5 M

The amounts are deliberately comparable to those in the GDPR and are calculated on global turnover, not only European.
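A quick sketch of how the ceilings combine with the higher-of rule. The tier keys are my own labels and the figures mirror the table above; amounts are in euros:

```python
# Fine ceilings per tier: (percentage of global annual turnover, fixed floor).
# The applicable ceiling is whichever of the two is higher.
FINE_TIERS = {
    "prohibited_practices": (0.07, 35_000_000),
    "general_noncompliance": (0.03, 15_000_000),
    "incorrect_information": (0.01, 7_500_000),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Ceiling for a given tier and global (not EU-only) annual turnover."""
    pct, fixed = FINE_TIERS[tier]
    return max(pct * global_turnover_eur, fixed)

# A company with €2B global turnover in a prohibited-practices case faces
# a ceiling of roughly €140M, since 7% exceeds the €35M floor. A small
# company with €100M turnover still faces the €7.5M floor on the lowest
# tier, because 1% of its turnover is only €1M.
big = max_fine("prohibited_practices", 2_000_000_000)
small = max_fine("incorrect_information", 100_000_000)
```

The second example is why the fixed floors matter: for small companies, the floor, not the percentage, sets the ceiling.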

The tier getting the most attention is the first, because prohibited practices are the most concrete and easiest to evaluate. Social scoring systems, emotion recognition in education or workplaces, untargeted scraping of facial images from the internet to build recognition databases — the law names these explicitly. If a company has used any of these since February, it is now exposed to real financial penalties.

The friction point: transparency for generative AI

One of the most debated parts has been the transparency requirement for AI-generated content. From August, systems that generate synthetic audio, image, video, or text must mark their output as artificially generated in a machine-readable way, so far as technically feasible. The law mentions watermarks, metadata, or similar techniques.

The Commission published guidance in June clarifying that the obligation is considered met if best available techniques are applied — it does not require perfection. This gives providers room but opens future litigation when better techniques existed and were not adopted. Teams embedding generative AI in products must document which technique they apply and why.
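For the metadata route, the documentation duty can start as something very small. A minimal sketch, with invented field names — real media pipelines would lean on a standard such as C2PA rather than an ad-hoc record like this:

```python
import json
from datetime import datetime, timezone

def mark_as_ai_generated(content: str, model_id: str, technique: str) -> dict:
    """Wrap generated content in a disclosure record that also captures
    which marking technique was applied (the point the guidance asks
    teams to be able to document)."""
    return {
        "content": content,
        "ai_generated": True,
        "model": model_id,
        "marking_technique": technique,  # e.g. "metadata", "watermark"
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

record = mark_as_ai_generated("Draft summary...", "example-model-v1", "metadata")
serialized = json.dumps(record)  # JSON-serializable for audit trails
```

The record answers the two questions a future dispute would raise: was the output marked, and which technique was chosen at the time.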

What to do in August if you operate in Europe

My practical checklist for teams with AI systems in production has four points:

  1. Classify each AI use case under the law’s categories: general-purpose, high-risk, limited, minimal.
  2. Review contracts with foundation model providers to ensure access to the technical documentation the law requires for downstream compliance.
  3. Implement traceability on when AI is used and with which model — in case of an incident the law requires reconstructing the flow.
  4. Map which national authority covers you and register a formal contact channel.

Most teams have the first two points half done and the last two untouched. The August date does not trigger automatic fines on those aspects — enforcement starts but is not applied with a heavy hand from day one. Authorities have signaled they will prioritize clear cases and prohibited practices. But if a 2026 incident requires explaining that a system was never classified, the potential fine grows.
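Point 3 of the checklist, in particular, does not need heavy tooling to start. A minimal sketch of an append-only usage log, with illustrative field names (nothing here is mandated by the Act):

```python
import json
from datetime import datetime, timezone

def log_ai_usage(path: str, use_case: str, risk_category: str,
                 model_id: str, request_id: str) -> None:
    """Append one JSON-lines record per AI invocation: when, which model,
    and under which classified use case, so the flow can be reconstructed
    after an incident."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,            # from the classification exercise
        "risk_category": risk_category,  # e.g. "limited", "high-risk"
        "model": model_id,
        "request_id": request_id,        # correlates with application logs
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```

A flat file like this is enough to answer "which model handled this request, and how was the use case classified" — the question an authority would actually ask first.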

How to think about the decision

The European law is the first comprehensive AI regulation in the western world, and that brings the Brussels effect whether you want it or not. American and Asian companies that sell in Europe are adapting their documentation to comply, which de facto extends the framework beyond the union. For those of us operating in Europe, this has competitive upside: a clear framework beats the uncertainty of adapting to fifty different US jurisdictions.

What I do not share in the usual debate tone is the idea that the law hampers innovation. August obligations do not prevent training new models or deploying new products. What they do is force documentation and risk thinking before deployment. Serious teams already did this. Teams that did not now have a calendar and an economic incentive to start. That looks like a net improvement.

The part that worries me is the administrative load on small companies. The law foresees proportionality but the concrete mechanics of that proportionality are still unclear. If a ten-person startup has to dedicate one person to compliance mapping for a quarter, the math hurts. The next Commission guidance bundle should clarify this. Until then, the recommendation is to document the essentials and wait to see how it is applied in real cases before investing in expensive compliance infrastructure.


Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.