The European AI Act entered into force in August 2024 with a staggered application calendar. The first parts, the absolute prohibitions on certain uses and basic transparency obligations, started applying in February 2025. General-purpose model obligations arrived in August 2025. And in August 2026, seven months ago now, the Act entered full application for high-risk systems. The first complete compliance cycle is closing now, and with it come the first practical conclusions, stripped of the institutional narrative.
What’s complied with effortlessly
Some of the Act’s obligations have been absorbed easily because they matched practices serious companies already had. Transparency about AI-generated content, mandatory since 2025, has been integrated into most products without issue: visible notices on generated images and videos, watermarks where technically feasible, labeling in automated conversations. The compliance cost here is low because the technical work had already been done under social pressure before the Act.
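A minimal sketch of what that labeling looks like in practice, assuming Pillow for image handling; the notice text, metadata keys, and generator name are illustrative choices, not terms the Act prescribes:

```python
# Visible notice plus machine-readable provenance metadata on a generated
# image. Assumes Pillow; label text and metadata keys are illustrative.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_generated_image(img: Image.Image, out_path: str) -> None:
    # Visible notice in the corner of the image.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated", fill="white")

    # Machine-readable flag embedded as PNG text chunks, so downstream
    # tools can still detect provenance if the visible notice is cropped.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", "example-model-v1")  # hypothetical name
    img.save(out_path, pnginfo=meta)

img = Image.new("RGB", (512, 512), "gray")  # stand-in for model output
label_generated_image(img, "output_labeled.png")
```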
The prohibition of certain unacceptable uses, like generalized social scoring or biometric identification in public spaces without judicial authorization, has also caused no serious problems, because no serious European company was planning them anyway. Here the Act acted as prevention against future commercial pressures rather than forcing an immediate change in practice.
Risk-management and impact-assessment documentation has been more laborious but manageable for companies with mature GDPR processes. The data protection impact assessment (DPIA) machinery has been extended to the AI case with reasonable adaptations. Companies with an established DPO and working processes have incorporated the AI Act’s obligations as an additional layer without upheaval.
What hurt most
The highest cost in the first cycle isn’t in the new obligations themselves but in identifying which systems count as “high risk”. The Act’s definition covers eight concrete categories (the Annex III areas) plus criteria that leave significant gray zones. Companies using AI to filter CVs, decide educational admissions, evaluate credit, or prioritize medical emergencies have found their systems falling under high risk even when the assisted function seemed minor.
The classification process has required coordinated legal and technical work that many companies hadn’t budgeted to do well. The practical result of the first cycle is that some organizations over-classified their systems to avoid regulatory risk, taking on a compliance load that perhaps didn’t apply, while others under-classified out of interpretive optimism, risking sanctions if an audit catches it.
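To illustrate why classification is laborious, here is a deliberately naive screening helper that triages systems against paraphrased Annex III areas; the keyword matching is a placeholder for the coordinated legal-technical review described above, not a substitute for it:

```python
# First-pass triage against (paraphrased) Annex III areas. Anything that
# doesn't match cleanly lands in the gray zone and goes to legal review.
HIGH_RISK_KEYWORDS = {
    "employment":  ("cv", "resume", "hiring", "promotion", "termination"),
    "education":   ("admission", "exam", "grading", "proctoring"),
    "credit":      ("credit", "loan", "creditworthiness", "scoring"),
    "emergencies": ("triage", "dispatch", "emergency"),
    # ... remaining areas omitted for brevity
}

def screen_system(description: str) -> tuple[str, str]:
    """Return (verdict, reason) for a plain-text system description."""
    text = description.lower()
    for area, words in HIGH_RISK_KEYWORDS.items():
        if any(w in text for w in words):
            return ("likely high-risk", f"matches Annex III area: {area}")
    return ("gray zone", "no clear match; escalate to legal review")

print(screen_system("ranks incoming CVs for recruiters"))
# ('likely high-risk', 'matches Annex III area: employment')
print(screen_system("suggests replies to customer emails"))
# ('gray zone', 'no clear match; escalate to legal review')
```

A real pipeline would err the other way: everything escalates unless legal has signed off, which is exactly the budget problem the first cycle exposed.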
The second high cost has been the obligation of effective human supervision in high-risk systems. “Effective” is a hard word: you can’t just put a human in front rubber-stamping everything, but detailed review of every decision isn’t realistic for systems processing thousands of them daily. Companies have had to design supervision flows with sampling, deviation alerts, and a real ability to override individual decisions, and this has required redesigning internal interfaces that were never built for genuine human supervision.
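A sketch of what such a flow can look like; the thresholds and the near-boundary deviation heuristic are illustrative assumptions, not anything the Act specifies:

```python
# Oversight flow with sampling, deviation alerts, and real override
# ability. Thresholds and the deviation heuristic are illustrative.
import random
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject_id: str
    score: float               # model output, e.g. a suitability score
    automated_outcome: bool    # what the system would do unsupervised
    final_outcome: bool | None = None
    reviewed_by_human: bool = False

@dataclass
class OversightQueue:
    threshold: float = 0.5      # decision boundary of the system
    deviation_band: float = 0.1 # scores this close to it get flagged
    sample_rate: float = 0.05   # fraction routed to review regardless
    pending: list[Decision] = field(default_factory=list)

    def route(self, d: Decision) -> Decision:
        near_boundary = abs(d.score - self.threshold) < self.deviation_band
        sampled = random.random() < self.sample_rate
        if near_boundary or sampled:
            self.pending.append(d)          # a human decides later
        else:
            d.final_outcome = d.automated_outcome
        return d

    def human_override(self, d: Decision, outcome: bool) -> None:
        # The reviewer's call always wins, and the override itself is an
        # auditable event that should feed the incident log.
        d.final_outcome = outcome
        d.reviewed_by_human = True
        self.pending.remove(d)

q = OversightQueue()
d = q.route(Decision("cand-001", score=0.53, automated_outcome=True))
if d in q.pending:
    q.human_override(d, outcome=False)
```

The point of the design is that the human only sees decisions worth seeing, but the override path is real: the reviewer’s outcome replaces the automated one rather than annotating it.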
The third cost is incident log management. The Act requires documenting relevant AI-system events: detected biases, significant errors, security failures. Setting up this logging infrastructure with integrity guarantees, adequate retention, and regulator access on request isn’t trivial, especially for companies whose internal telemetry wasn’t designed for external audit.
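A minimal sketch of a tamper-evident log, assuming a hash-chained JSONL file; the field names and event kinds are assumptions, not anything the Act specifies:

```python
# Tamper-evident incident log: each entry commits to the hash of the
# previous one, so retroactive edits break the chain on verification.
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "ai_incidents.jsonl"

def _entry_hash(entry: dict) -> str:
    payload = json.dumps(entry, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_incident(kind: str, detail: str, prev_hash: str) -> str:
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "kind": kind,       # e.g. "bias_detected", "security_failure"
        "detail": detail,
        "prev": prev_hash,  # links this entry into the chain
    }
    entry["hash"] = _entry_hash(entry)
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["hash"]

def verify_chain(path: str = LOG_PATH) -> bool:
    prev = "genesis"
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            stored = entry.pop("hash")
            if entry["prev"] != prev or _entry_hash(entry) != stored:
                return False
            prev = stored
    return True

h = append_incident("bias_detected", "gender skew in shortlist rates", "genesis")
append_incident("significant_error", "score drift after model update", h)
assert verify_chain()
```

Retention policy and the regulator-facing export still have to be solved around the file, but the chaining is what makes the log defensible under audit.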
What’s de facto ignored
The first widespread non-compliance of the cycle is the notification of users affected by high-risk automated decisions. The Act requires clearly informing a person when a decision that significantly affects them was made with an AI system. In practice, many companies notify only where the Act explicitly demands it and avoid doing so in gray zones where they could argue the final decision was human.
The second common non-compliance is the quality of training-data documentation. The Act requires documenting sources, cleaning processes, known potential biases, and mitigation measures. For in-house models trained on well-governed data this is manageable; for systems built on proprietary third-party models, the documentation depends on what the provider publishes, and in practice many providers publish less than the Act would require if the system were built in-house. The European Commission has started pressing the large providers, but the timelines are long.
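For the in-house case, a sketch of what machine-readable documentation can look like; the field names are illustrative and the example dataset is hypothetical:

```python
# Machine-readable training-data record covering the items the Act asks
# for: sources, cleaning, known biases, mitigations. Fields illustrative.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class DataSource:
    name: str
    license: str
    collection_method: str

@dataclass
class TrainingDataRecord:
    dataset_name: str
    sources: list[DataSource]
    cleaning_steps: list[str]
    known_biases: list[str]
    mitigations: list[str]
    open_questions: list[str] = field(default_factory=list)

record = TrainingDataRecord(
    dataset_name="hiring-signals-v3",  # hypothetical dataset
    sources=[DataSource("internal ATS exports", "internal", "operational logs")],
    cleaning_steps=["dedup by candidate id", "strip protected attributes"],
    known_biases=["historical gender skew in engineering roles"],
    mitigations=["reweighting during training", "shortlist-rate monitoring"],
    open_questions=["provider model card lacks pretraining-source detail"],
)

with open("training_data_record.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```

The third-party gap shows up naturally in a structure like this: whatever the provider doesn’t disclose stays in open_questions, which is at least honest documentation of the limit.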
The third non-compliance is less visible: many companies treated the compliance exercise as a static, once-a-year documentation task instead of a continuous process. The Act implicitly expects the risk analysis to be updated when the system changes, but in practice many organizations produce the initial report, file it, and keep modifying the system without revisiting compliance. This works as long as nobody audits, and it creates accumulated exposure that will eventually surface.
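One way to make the process continuous rather than annual is to gate deployment on a fingerprint check. A sketch, where the artifact paths and the assessment-record format are placeholders:

```python
# Fail CI when the deployed artifacts no longer match the fingerprint
# recorded at the last risk assessment. Paths and format are assumptions.
import hashlib
import json
from pathlib import Path

ASSESSMENT_RECORD = Path("last_assessment.json")  # written at sign-off
ARTIFACTS = ["model_weights.bin", "inference_config.yaml", "prompts.txt"]

def fingerprint(paths: list[str]) -> str:
    h = hashlib.sha256()
    for p in sorted(paths):
        h.update(Path(p).read_bytes())
    return h.hexdigest()

def assessment_is_current() -> bool:
    if not ASSESSMENT_RECORD.exists():
        return False  # never assessed: fail loudly, not silently
    recorded = json.loads(ASSESSMENT_RECORD.read_text())["fingerprint"]
    return fingerprint(ARTIFACTS) == recorded

if __name__ == "__main__":
    if not assessment_is_current():
        raise SystemExit("system changed since last risk assessment; re-run the review")
```

The check is crude on purpose: it can’t tell a cosmetic change from a risk-relevant one, so it forces a human decision every time the system drifts from its assessed state, which is exactly what the file-and-forget pattern avoids.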
Real sanctions in the first cycle
The sanctions applied in the first cycle have been few but visible. The first significant fine came in October 2026, against a French employment platform for a CV-filtering system without adequate human supervision; the figure, two million euros, wasn’t catastrophic but was enough to signal the market. The second, in January 2027, went to a German bank for a credit-scoring system without complete training-data documentation. Both sanctions fell well below the regulation’s theoretical maximum, a signal that regulators are in a pedagogical rather than punitive phase.
The strategy of national authorities and the European AI Office has been clear: prioritize evident cases first, use moderate sanctions to set jurisprudence, and let companies adjust. The expectation is that truly high fines will come from 2027 onward once doctrine is established and non-compliances are harder to justify as interpretive error.
What seems to have worked
Two things deserve positive mention in the balance. First, the European AI Office has acted with more pragmatism than many feared. Its guidance documents published during 2025 and 2026 have clarified obscure points of the Act without hardening interpretations beyond what’s reasonable. The national regulatory sandboxes provided for in the Act have worked relatively well in France, Germany, and the Netherlands; less so in countries with scarcer resources.
Second, the Act has pushed many companies to professionalize AI governance that used to be ad hoc or nonexistent: internal committees, dedicated AI compliance roles, structured review of systems before deployment. This professionalization would have taken many more years without regulation, and though the cost is real, the benefits of mature governance processes accumulate over time.
What’s missing
The first pending point is real harmonization across national authorities. There are interpretive divergences between Spain, France, Germany, and Italy that create uncertainty for multinational operations. The AI Office has a coordination mandate, but the first cases show that coordination is more declarative than effective. Fixing this is a priority if the Act is to be seriously operational.
The second pending point is the treatment of general-purpose models and their systemic obligations. The Act’s definition of “systemic risk” remains ambiguous for concrete cases. The AI Office has published an initial list of models subject to systemic obligations, but the entry and exit criteria aren’t fully transparent, which generates friction with the large providers.
My reading
The first full-application cycle of the European AI Act ends with a more nuanced balance than either enthusiasts or detractors expected. The Act hasn’t destroyed European innovation nor solved all the transparency and bias problems that motivated it. It has introduced moderate friction serious companies absorb without drama, forced many organizations to formalize AI governance, and generated a few visible cases setting precedent.
For companies, the practical lesson is that Act compliance is manageable when treated as a continuous process integrated with risk management and data protection, and expensive and risky when treated as a one-off annual-report project. Most of the real work isn’t producing documentation but redesigning internal flows so that human supervision is effective, logs are reliable, and system changes trigger updates to the analysis. That can’t be improvised and can’t be outsourced; it requires real process change. Companies that understood this in the first cycle are well-positioned for the next one; those that didn’t will probably hear from the regulator before next August.