Artificial Intelligence

EU AI Act: What Changes for Your Company

Updated: 2026-05-03

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 as the world's first comprehensive AI regulatory framework. Implementation is gradual: prohibitions took effect in February 2025, general-purpose AI obligations in August 2025, and high-risk system requirements apply from August 2026. This article covers what matters for companies operating in the EU.

Key takeaways

  • Territorial scope is as broad as GDPR's: the Act applies even to companies outside the EU if they deploy systems there or sell to European customers.
  • Penalties for prohibited systems reach 7% of global turnover — higher than GDPR.
  • High-risk systems include employment, education, essential public services, biometrics, and critical infrastructure.
  • GPAI models with over 10^25 FLOPs of training compute (GPT-4, Claude 3, Gemini) have additional evaluation and risk-management obligations.
  • Building compliance from design is far cheaper than retrofit — the same principle as GDPR.

Risk-based approach

The AI Act classifies systems into four risk levels:

Prohibited systems (from February 2025)

  • Social scoring (rating people based on social behaviour or personal characteristics).
  • Behavioural manipulation exploiting psychological vulnerabilities.
  • Real-time biometric identification in public spaces (with law-enforcement exceptions).
  • Emotion recognition in workplace or educational environments.

High-risk (August 2026+)

Systems used in:

  • Critical infrastructure.
  • Education (admissions, grading).
  • Employment (hiring, monitoring).
  • Essential public services.
  • Law enforcement, immigration.
  • Justice administration.
  • Biometric identification.

Obligations: risk management, data quality, logging and traceability, transparency, human oversight, accuracy, robustness, and cybersecurity.
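The logging-and-traceability obligation can be sketched in code. This is a minimal, illustrative example, not a prescribed schema: the field names (`system_id`, `model_version`, `reviewed_by_human`) are assumptions chosen to show what evidence of traceability and human oversight might look like in practice.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_decision_log")

def log_decision(system_id: str, model_version: str,
                 inputs: dict, output: str,
                 reviewed_by_human: bool) -> dict:
    """Record one automated decision as a structured, auditable log entry."""
    record = {
        # UTC timestamps keep multi-region logs comparable
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,     # which model produced the output
        "inputs": inputs,                   # what the system saw
        "output": output,                   # what it decided
        "reviewed_by_human": reviewed_by_human,  # human-oversight evidence
    }
    logger.info(json.dumps(record))
    return record
```

A real deployment would also need retention policies and tamper-evident storage; the point here is only that every decision leaves a reconstructable trail.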

Limited risk

  • Chatbots: must disclose AI nature.
  • Deepfakes: mandatory labelling.
  • AI-generated content: disclosure.
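For the chatbot disclosure duty, a minimal sketch (the wording and placement of the notice are illustrative assumptions, not text prescribed by the Act):

```python
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def wrap_chatbot_reply(reply: str, first_turn: bool) -> str:
    """Prepend the AI-nature disclosure at the start of an interaction."""
    if first_turn:
        # Disclose once, up front, before any substantive reply
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```

The same pattern extends to labelling generated content: attach the disclosure at the point where a user could otherwise mistake the output for human-produced.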

Minimal risk

Video games, spam filters, recommenders without significant impact. No major obligations.

General-Purpose AI (GPAI)

Models like GPT-4, Claude, Llama, and Mistral Large are subject to two levels of obligations:

Base tier (all GPAI)

  • Technical documentation.
  • Information for downstream deployers.
  • Respect for copyright.
  • Public summary of training data.

“Systemic risk” tier

Models with training compute exceeding 10^25 FLOPs:

  • Model evaluation before release.
  • Documented risk assessment.
  • Red-teaming (adversarial testing).
  • Cybersecurity protections.
  • Serious incident reporting to the competent authority.
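The 10^25 FLOPs threshold can be estimated before training with the common rule of thumb of roughly 6 FLOPs per parameter per training token. The parameter and token counts below are hypothetical examples, not figures for any named model:

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, the AI Act's GPAI criterion

def estimated_training_flops(params: float, tokens: float) -> float:
    # Rule of thumb: ~6 FLOPs per parameter per training token
    return 6 * params * tokens

def has_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# 70B parameters on 15T tokens: ~6.3e24 FLOPs, below the threshold
print(has_systemic_risk(70e9, 15e12))   # False
# 405B parameters on 15T tokens: ~3.6e25 FLOPs, above it
print(has_systemic_risk(405e9, 15e12))  # True
```

The estimate matters for planning: a lab close to the threshold knows in advance whether the systemic-risk obligations (evaluation, red-teaming, incident reporting) will attach.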

Key deadlines

  • 1 Aug 2024: Entry into force.
  • Feb 2025: Prohibited systems effective.
  • Aug 2025: GPAI obligations apply.
  • Aug 2026: High-risk system requirements apply.
  • Aug 2027: Extended transition for certain high-risk systems ends.

Does it apply to your company?

The AI Act applies if you:

  • Develop an AI system in the EU.
  • Deploy an AI system in the EU, even if the company is outside.
  • Sell to EU customers (territorial scope similar to GDPR).
  • Have EU employees using AI systems (deployer obligations apply).

Penalties

  • Prohibited AI: up to €35M or 7% of global annual turnover, whichever is higher.
  • High-risk non-compliance: up to €15M or 3%.
  • Incorrect information to authority: up to €7.5M or 1%.

Significantly higher than GDPR's maximum of €20M or 4%.
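The "or" in these caps means whichever amount is higher, which is worth making concrete. A small sketch (the turnover figures are hypothetical):

```python
def penalty_cap(global_turnover_eur: float,
                fixed_eur: float, pct: float) -> float:
    # The Act caps fines at the HIGHER of the fixed amount and the
    # percentage of worldwide annual turnover
    return max(fixed_eur, pct * global_turnover_eur)

# Prohibited-AI tier for a company with €2B turnover:
# 7% of €2B is €140M, which exceeds the €35M floor
cap_large = penalty_cap(2e9, 35e6, 0.07)

# For a company with €100M turnover, the €35M floor dominates
cap_small = penalty_cap(1e8, 35e6, 0.07)
```

For large companies the percentage dominates, so exposure scales with revenue rather than stopping at the headline euro figure.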

Action plan

Immediate

  • Inventory AI systems in use or development.
  • Classify risk level per system.
  • Gap analysis against applicable requirements.
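The inventory-and-classify steps can start as something as simple as the sketch below. The domain list is abbreviated and the triage logic deliberately simplistic: real classification requires legal review against Annex III and Article 5, so treat this as a first-pass data structure, not a compliance determination.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Abbreviated list of the Act's high-risk domains (Annex III is longer)
HIGH_RISK_DOMAINS = {
    "employment", "education", "critical_infrastructure",
    "essential_services", "law_enforcement", "justice", "biometrics",
}

@dataclass
class AISystem:
    name: str
    domain: str
    interacts_with_humans: bool = False

def classify(system: AISystem) -> RiskLevel:
    """First-pass triage; a lawyer confirms the final classification."""
    if system.domain in HIGH_RISK_DOMAINS:
        return RiskLevel.HIGH
    if system.interacts_with_humans:
        return RiskLevel.LIMITED      # e.g. disclosure duties for chatbots
    return RiskLevel.MINIMAL

inventory = [
    AISystem("cv-screening", "employment"),
    AISystem("support-chatbot", "customer_service", interacts_with_humans=True),
    AISystem("spam-filter", "email"),
]
for s in inventory:
    print(s.name, classify(s).value)
```

Even this rough pass surfaces the systems that need immediate attention, which is the real goal of the inventory step.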

Before February 2025

  • Eliminate or redesign any system in the prohibited category.

Before August 2025

  • Complete GPAI documentation if applicable.
  • Staff training for those operating AI systems.

Before August 2026

  • Achieve high-risk compliance, with third-party conformity assessment where required.

Interaction with GDPR

The AI Act complements but does not replace GDPR. When an AI system processes personal data, both frameworks apply simultaneously:

  • Personal data processing by AI continues to follow GDPR.
  • A DPIA is mandatory for high-risk AI processing personal data.
  • The safeguards for automated decision-making (GDPR Art. 22), including the right to human intervention, remain in force.
  • Data minimisation applies to training and inference.

For deployment strategies that consider data sovereignty, see open LLMs: the enterprise choice — open-weight models with on-premise inference simplify compliance with both frameworks.

Open-source exemption

Open-weight models have some exemptions, but with important nuances:

  • Lighter training-data transparency obligations.
  • Reduced burden if non-profit.
  • Important exception: if the model is offered as a service, obligations are equivalent to those for a closed model.

This directly affects projects like Llama or Mistral when served via their own API — see Mistral Large: the European contender.

Regulatory sandbox

Member states must establish regulatory sandboxes for testing compliance in a controlled environment, with the possibility of consulting regulators under reduced enforcement risk. For startups, this is an opportunity to navigate the complexity before scaling.

Startup impact

  • High-risk AI: significant compliance cost (potentially in the millions range).
  • GPAI obligations: manageable with documentation discipline from day one.
  • Limited risk: minimal impact.

The lesson for startups: integrating compliance from design is incomparably cheaper than retrofit. The same principle that applies to security and accessibility applies here.

International coordination

The AI Act establishes a de facto global standard — similar effect to GDPR:

  • The UK AI Safety Institute maintains cooperation with the EU.
  • The US Executive Order has some alignment points.
  • The Bletchley Declaration establishes an international coordination track.

Companies operating in multiple jurisdictions would do well to adopt the AI Act as their global compliance baseline.

Conclusion

The AI Act is a regulatory reality that companies with EU operations cannot ignore. Gradual deadlines allow sensible preparation, but starting now is prudent: AI system inventory, risk classification, and gap analysis all take time. For startups, building compliant by design is cheaper than retrofit. For established companies, a structured action plan reduces execution risk. Those who prepare earlier have real competitive advantage — compliance is not a pure cost, it is an entry barrier for those who come late.


Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.