2 August 2025 marks a date we had been watching for years on compliance calendars. It is the day the EU AI Act, published in the Official Journal in July 2024, begins applying its second block of obligations: the general-purpose model regime, the designation of national authorities, and the penalty framework. This post is a practical summary of what actually changes for teams shipping AI systems in Europe, written from the operator’s chair, not the lawyer’s.
What takes effect on 2 August
The AI Act does not take effect all at once, and that staggering matters before any other discussion. The calendar has three key dates: February 2025 for prohibited practices, August 2025 for general-purpose models and penalties, and August 2026 for high-risk systems. What activates now are the Chapter V obligations on general-purpose models, the governance articles, and the fines tied to non-compliance.
In practice this means that providers of large foundation models (think OpenAI, Anthropic, Google, Mistral, Meta) must comply from this date with concrete obligations: technical documentation, a copyright policy, a summary of training data, and notification to the Commission if the model crosses the systemic-risk threshold. For new models placed on the market from August 2025, the obligations apply immediately. For models already on the market, there is an adaptation period until August 2027.
What being a general-purpose model entails
The law defines a general-purpose model as one trained on a large amount of data, able to perform a wide range of tasks, and suitable for downstream integration. The definition includes a quantitative threshold: models trained with more than 10^25 floating-point operations (FLOPs) are presumed to carry systemic risk. The figure was chosen with GPT-4 as a reference and today covers the frontier models from the major labs.
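To get an intuition for the 10^25 FLOPs threshold, it can be sketched numerically with the common "6 × parameters × tokens" rule of thumb from the scaling-law literature. This is an approximation, not the Act’s official accounting method, and the model sizes below are hypothetical:

```python
# Rough training-compute estimate using the common 6 * params * tokens
# rule of thumb (an approximation, not the Act's official methodology).
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs above which systemic risk is presumed

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70B-parameter model trained on 15T tokens lands at
# roughly 6.3e24 FLOPs, just under the presumption threshold.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}", presumed_systemic_risk(70e9, 15e12))
```

Under this approximation, only a handful of frontier-scale training runs cross the line, which matches the law’s intent of singling out the largest models.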
For most companies that use these models as customers rather than providers, the law imposes no new obligations on this date. What changes is that customers can now require technical documentation and copyright policies from the provider, which turns some boilerplate API contracts into negotiable documents. I have seen several corporate procurement rounds reopen this quarter because compliance teams requested addenda that had not been drafted before.
National authorities and the European office
Each member state must designate competent national authorities to oversee application by 2 August. In Spain, AESIA, operating since 2023, takes on this role from its headquarters in A Coruña. The European AI Office, inside the Commission, coordinates cross-border supervision and directly oversees providers of general-purpose models with systemic risk. The two-layer structure is deliberate: national authorities handle specific uses, while the European office handles base models.
Operationally, what matters is that there is now a clear channel for reporting, consultation, and interpretive guidance. During the two-year transition, this authority layer was more theoretical than practical. With designations closed and offices staffed, you can raise ambiguous questions with the regulator instead of speculating internally.
Fines that actually hurt
The law sets three fine tiers. For prohibited practices: up to 7% of global annual turnover or 35 million euros, whichever is higher. For general non-compliance: up to 3% or 15 million. For supplying incorrect information to authorities: up to 1% or 7.5 million. The amounts are deliberately comparable to those under the General Data Protection Regulation, and they are calculated on global turnover, not only European turnover.
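The "whichever is higher" logic is worth internalizing, because for large companies the percentage dominates. A quick sanity-check sketch (the tier names and helper function are ours, purely illustrative):

```python
def fine_cap(global_turnover_eur: float, tier: str) -> float:
    """Upper bound of an AI Act fine: the higher of a percentage of
    global annual turnover and a fixed amount, depending on the tier."""
    tiers = {
        "prohibited_practices": (0.07, 35_000_000),
        "general_noncompliance": (0.03, 15_000_000),
        "incorrect_information": (0.01, 7_500_000),
    }
    pct, fixed = tiers[tier]
    return max(pct * global_turnover_eur, fixed)

# A company with 2 billion EUR global turnover: 7% (140M) beats the
# 35M floor, so the percentage sets the cap.
print(fine_cap(2e9, "prohibited_practices"))
```

For a small company the fixed amounts bind instead, which is part of why the proportionality debate discussed later matters.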
The tier getting the most attention is the first, because prohibited practices are the most concrete and the easiest to evaluate. Social scoring systems, emotion recognition in schools or workplaces, and mass scraping of facial images from the internet to build databases are examples the law names explicitly. A company that has used any of these since February is now exposed to real financial penalties.
The friction point: transparency for generative AI
One of the most debated points of the legislative process was the transparency requirement for AI-generated content. Systems generating synthetic audio, images, video, or text must mark their output as artificial through technical means where technically feasible; the law mentions watermarks, metadata, or similar mechanisms. The open problem is that no robust watermark exists today for generated text, and image metadata is easily stripped on re-sharing.
The Commission published guidance in June clarifying that the obligation is considered met if the best available techniques are applied; it does not require perfection. This gives providers room, but it opens the door to future litigation if it is later shown that better techniques existed and were not adopted. Teams embedding generative AI in products should document which technique they apply and why.
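Documenting which technique you apply and why does not have to be heavyweight. A minimal sketch of a machine-readable provenance record, assuming a JSON sidecar next to each generated asset; all field names here are hypothetical, nothing in this schema is mandated by the Act:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: bytes, model: str,
                      technique: str, rationale: str) -> dict:
    """Build an illustrative sidecar record stating that an output is
    AI-generated, which marking technique was used, and the rationale.
    Field names are our own invention, not prescribed by the Act."""
    return {
        "ai_generated": True,
        "model": model,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "marking_technique": technique,
        "rationale": rationale,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(
    b"example synthetic image bytes",
    model="image-gen-v2",              # hypothetical model name
    technique="embedded metadata",      # e.g. a manifest or watermark
    rationale="no robust watermark available for this modality",
)
print(json.dumps(record, indent=2))
```

The point is less the exact schema than having a dated, queryable trail showing which technique was chosen and why, which is precisely what the litigation scenario above would turn on.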
What to do in August if you operate in Europe
My practical checklist for teams with AI systems in production in August 2025 has four points. First, classify each AI use case under the law’s categories: general-purpose, high-risk, limited risk, minimal risk. Second, review contracts with foundation model providers to ensure access to the technical documentation the law requires for downstream compliance. Third, implement traceability for when AI is used and with which model, because in the event of an incident the law requires reconstructing the flow. Fourth, map which national authority covers you and establish a formal contact channel.
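The third point, traceability, can start as something as simple as an append-only log of every AI invocation. A minimal sketch with illustrative field names (the schema and file layout are our own, not prescribed by the law):

```python
import json
import uuid
from datetime import datetime, timezone

def log_ai_call(log_path: str, use_case: str, risk_category: str,
                provider: str, model: str) -> str:
    """Append one traceability entry per AI invocation to a JSONL file.
    Illustrative schema; field names are ours, not mandated by the Act."""
    entry = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,
        "risk_category": risk_category,  # general-purpose / high-risk / limited / minimal
        "provider": provider,
        "model": model,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["id"]

# Hypothetical example: a limited-risk support triage use case.
log_ai_call("ai_trace.jsonl", "support ticket triage", "limited",
            "example-provider", "example-model-v1")
```

An append-only JSONL file is enough to reconstruct which model handled which use case and when; anything fancier (a database, retention policies) can be layered on later.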
Most teams I have seen have the first two points half done and the last two untouched. The August date does not trigger automatic fines on those fronts; enforcement begins, but it will not be applied with a heavy hand from day one, and authorities have signaled they will prioritize clear-cut cases and prohibited practices. But if an incident in 2026 requires explaining that a system was never classified, the potential fine grows. That is reputational and financial risk worth closing out now.
How to think about it
The European law is the first comprehensive AI regulation in the Western world, and that brings the Brussels effect whether you want it or not. American and Asian companies that sell in Europe are adapting their documentation to comply, which de facto extends the framework beyond the Union. For those of us operating in Europe, this has a competitive upside: a clear framework beats the uncertainty of adapting to fifty different US jurisdictions.
What I do not share from the usual debate is the idea that the law hampers innovation. The August obligations do not prevent training new models or deploying new products. What they do is force documentation and risk thinking before deployment. Serious teams already did this. Teams that did not now have a calendar and an economic incentive to start. That looks like a net improvement to me.
The part that worries me is the administrative load on small companies. The law foresees proportionality but the concrete mechanics of that proportionality are still unclear. If a ten-person startup has to dedicate one person to compliance mapping for a quarter, the math hurts. The next Commission guidance bundle, expected in autumn, should clarify this. Until then, the recommendation is to document the essentials and wait to see how it is applied in real cases before investing in expensive compliance infrastructure.