Artificial Intelligence

Generative AI and Regulation: First Legislative Steps

Updated: 2026-05-03

Generative AI went from academic experiment to mass-consumer technology in less than 18 months. This has accelerated regulatory efforts worldwide, with Europe, the United States, and the United Kingdom taking notably different approaches. For teams integrating LLMs into products, understanding the emerging regulatory landscape is now product work, not just legal.

Key takeaways

  • The EU AI Act uses a four-tier risk system; since the Parliament’s June proposal, foundation models have their own transparency, risk-assessment, and incident-management obligations.
  • The US opted for voluntary commitments from seven labs plus the NIST AI Risk Management Framework; the real risk is civil litigation, not immediate fines.
  • The UK adopted a “pro-innovation” approach with five principles delegated to existing sectoral regulators: more agile, but with gaps between regulators.
  • Several concrete obligations are converging across jurisdictions: generated-content marking, training-data transparency, and rights to challenge automated decisions.
  • The most useful action now for any product team is to inventory AI use cases, document limitations, and create feedback channels.

EU AI Act: the most comprehensive approach

The EU AI Act[1], proposed by the Commission in 2021 and currently in trilogue between the Commission, Parliament, and Council, is the most comprehensive AI regulation in preparation. Key points of the draft:

  • Risk-tier system. Four categories: unacceptable (banned), high (strict regulation), limited (transparency), minimal (no special obligations).
  • Foundation models. Added in the Parliament’s June proposal: transparency obligations on training data, risk assessment, and incident management.
  • Fines of up to 6% of global turnover for serious non-compliance, above the GDPR’s 4% ceiling.

Teams developing or integrating AI into commercial products in the EU should start mapping which of their use cases will fall into “high risk”: employment, credit, education, essential services, justice.
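
As a starting point, here is a minimal sketch of what such a mapping could look like in code. The domain list follows the draft’s high-risk examples above; everything else (the UseCase type, the triage rules) is illustrative and is no substitute for a legal assessment:

```python
from dataclasses import dataclass

# Illustrative high-risk domains taken from the draft's examples above;
# the final annex in the AI Act will be more granular than this.
HIGH_RISK_DOMAINS = {"employment", "credit", "education", "essential_services", "justice"}

@dataclass
class UseCase:
    name: str
    domain: str                       # e.g. "employment", "marketing", ...
    processes_personal_data: bool
    makes_decisions_about_people: bool

def draft_risk_tier(uc: UseCase) -> str:
    """Very rough first-pass triage, not a legal assessment."""
    if uc.domain in HIGH_RISK_DOMAINS and uc.makes_decisions_about_people:
        return "high"
    if uc.processes_personal_data:
        return "limited"              # transparency obligations likely
    return "minimal"

inventory = [
    UseCase("CV screening assistant", "employment", True, True),
    UseCase("Marketing copy generator", "marketing", False, False),
]
for uc in inventory:
    print(f"{uc.name}: {draft_risk_tier(uc)}")
```

The value is less in the classifier itself than in forcing every use case into the inventory with an explicit answer to each question.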

United States: executive order and NIST framework

The US has taken a more fragmented path. Relevant components:

  • Voluntary commitments (July): seven major AI labs (OpenAI, Anthropic, Google, Microsoft, Meta, Amazon, Inflection) signed voluntary commitments including external red-teaming, generated-content marking, and safety research. Not law, but it sets expectations.
  • NIST AI Risk Management Framework[2] (January): technical framework for evaluating and mitigating AI risks. Voluntary, but referenced in government procurement.
  • Pending executive order: the administration has indicated a broader executive order is in preparation.

For tech companies with a US presence, the near-term risk is not immediate fines but civil lawsuits; several are already active over the use of copyrighted training data.

United Kingdom: “pro-innovation” approach

The UK government White Paper[3] (March) proposes an alternative to the European approach: instead of a transversal law, five principles applied by existing sectoral regulators (ICO, FCA, CMA, Ofcom, HSE):

  1. Safety and robustness.
  2. Transparency and explainability.
  3. Fairness.
  4. Accountability and governance.
  5. Contestability and redress.

This approach is theoretically more agile, but it leaves gaps between regulators and creates uncertainty about which regulator applies to which case.

[Figure: EU AI Act risk-tier diagram, showing the unacceptable, high, limited, and minimal categories with examples of each]

Concrete obligations emerging

Several obligations appear repeatedly in drafts across jurisdictions:

Generated-content marking

The Content Authenticity Initiative[4] and C2PA[5] are being promoted as standards. The EU AI Act explicitly requires such marking for deepfakes.
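
A real C2PA manifest is defined by the specification and its SDKs; until that tooling is wired in, a product can at least attach a simple provenance record to every generated output. The sketch below is a JSON sidecar with invented field names, not a C2PA manifest:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_sidecar(content: bytes, model_id: str) -> dict:
    """Minimal AI-provenance record. NOT a C2PA manifest:
    all field names here are illustrative stand-ins."""
    return {
        "generator": model_id,
        "ai_generated": True,
        "created_at": datetime.now(timezone.utc).isoformat(),
        # The hash binds the record to the exact bytes it describes.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

image_bytes = b"...generated image bytes..."
record = provenance_sidecar(image_bytes, model_id="example-diffusion-v1")
with open("output.png.provenance.json", "w") as f:
    json.dump(record, f, indent=2)
```

A sidecar is trivially strippable, which is exactly why C2PA embeds signed manifests in the asset itself; the point here is only to start recording provenance at generation time.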

Training-data transparency

The European draft requires publishing a “sufficiently detailed” summary of copyright-protected data used in training. Interpretation of “sufficiently detailed” remains open.
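
What a “sufficiently detailed” summary must contain is still open, but one plausible building block is aggregating a dataset manifest by source and license. The manifest format below (a CSV with "source" and "license" columns) is invented for this sketch:

```python
import csv
from collections import Counter

def summarize_manifest(path: str) -> None:
    """Aggregate a hypothetical per-document training manifest
    into counts by source and by license."""
    by_source, by_license = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_source[row["source"]] += 1
            by_license[row["license"]] += 1
    print("Documents per source:", dict(by_source))
    print("Documents per license:", dict(by_license))

summarize_manifest("training_manifest.csv")
```

Whatever the final legal wording, a team that cannot produce this kind of aggregate from its data pipeline will struggle to produce any summary at all.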

Rights of affected subjects

Rights to challenge automated decisions, to receive an explanation, and to obtain rectification. These already exist partially under GDPR Article 22, but the EU AI Act reinforces them.
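
Operationally, contestability presupposes that each automated decision is logged with enough context to explain and revisit it later. A minimal sketch, with invented field names:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str                      # e.g. "loan_denied"
    model_version: str
    inputs_summary: dict              # the features the model actually saw
    explanation: str                  # human-readable rationale
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    contested: bool = False
    human_review_outcome: str | None = None

def contest(decision: AutomatedDecision) -> None:
    """Flag a decision for human review (Article 22-style escalation)."""
    decision.contested = True
```

If the inputs and model version are not captured at decision time, no later process can reconstruct why the system decided what it did.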

Risk assessment

Systematic documentation of use cases, identified risks, and mitigation measures. The NIST AI RMF provides a useful template, regardless of jurisdiction.
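
The RMF’s four functions (GOVERN, MAP, MEASURE, MANAGE) can structure a simple risk register long before any legal requirement bites. A sketch with illustrative entries:

```python
from dataclasses import dataclass

# The four NIST AI RMF functions; everything else here is illustrative.
RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

@dataclass
class RiskEntry:
    use_case: str
    risk: str
    rmf_function: str     # which RMF function addresses it
    mitigation: str
    owner: str

register = [
    RiskEntry("CV screening assistant", "biased ranking of candidates",
              "MEASURE", "fairness evaluation before each release", "ml-team"),
    RiskEntry("Support chatbot", "hallucinated policy answers",
              "MANAGE", "retrieval grounding plus human escalation path", "support"),
]
for entry in register:
    assert entry.rmf_function in RMF_FUNCTIONS
    print(f"[{entry.rmf_function}] {entry.use_case}: {entry.risk} -> {entry.mitigation}")
```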

What to do now as a product team

Three practical actions independent of specific country:

  1. Inventory generative-AI use cases in your product. Identify which ones process personal data, make decisions about people, or generate content that could deceive.
  2. Document limitations and expected behaviour. This doesn’t replace a formal assessment, but it builds a base for compliance once the regulations take final shape.
  3. Create feedback and correction channels. If the AI makes mistakes, users must be able to report them, have them investigated, and have them corrected; nearly every framework will formally require this. A minimal sketch follows below.
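
A feedback channel can start very small, as long as every report is tracked to resolution. A minimal in-memory sketch (a real system would persist reports and notify owners); all names are illustrative:

```python
import itertools
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    REPORTED = "reported"
    INVESTIGATING = "investigating"
    CORRECTED = "corrected"

_ids = itertools.count(1)

@dataclass
class AIIssueReport:
    description: str
    reporter: str
    status: Status = Status.REPORTED
    report_id: int = field(default_factory=lambda: next(_ids))

reports: list[AIIssueReport] = []

def report_issue(description: str, reporter: str) -> AIIssueReport:
    """Record a user report about AI behaviour for later triage."""
    r = AIIssueReport(description, reporter)
    reports.append(r)
    return r

r = report_issue("Chatbot invented a refund policy", "user-123")
r.status = Status.INVESTIGATING   # triage step; CORRECTED closes the loop
```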

Also see the NIS2 directive: cybersecurity and AI regulation converge on many operational obligations. To understand what the models themselves do, see how to install Ollama and OpenAI’s code interpreter.

[Figure: map of the three AI regulatory frameworks: the EU AI Act, the US NIST framework, and the UK White Paper]

Conclusion

Generative AI regulation is in its early phase but advancing quickly. Europe leads with the most comprehensive proposal; the US moves through voluntary commitments and litigation; the UK attempts a middle-ground approach. For any product using AI, documenting use cases, risks, and mitigations now is an investment ahead of the regulation to come.

References

  1. EU AI Act
  2. NIST AI Risk Management Framework
  3. UK government White Paper
  4. Content Authenticity Initiative
  5. C2PA

Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.