
AI governance in enterprise: committees, policies, audits

Updated: 2026-05-03

Enterprise AI governance stopped being an academic exercise when the first EU AI Act provisions came into force: prohibition of certain practices (social scoring, subliminal manipulation, sensitive biometric categorization) and the requirement that staff using AI reach a minimum literacy level. Sanctions for prohibited practices can reach 7 percent of global annual turnover, which focuses even the most distracted board’s attention.

This piece shares what has worked when setting up or supporting AI governance across several organizations. It is not a legal analysis of the AI Act, but a practical guide to how the work gets organized: committee, policies, inventory, risk assessment, audits.

Key takeaways

  • An AI committee works best with five to seven decision-makers, not fifteen in advisory roles.
  • The usage policy must separate three layers: approved tools, permitted use cases, and cross-cutting obligations.
  • Without a living systems inventory, governance is impossible; the registry comes from processes, not manual maintenance.
  • A five-dimension risk assessment guides the level of controls needed for each system.
  • Well-executed internal audits build capability; external ones validate for clients and regulators.

The AI committee and who it should include

The most frequent mistake is setting up an AI committee with only technical profiles. Governance is precisely about what lies beyond the technical: impact on people, legal compliance, reputation, cost. A useful committee combines four perspectives:

  • Technical (engineering, data, security).
  • Legal and compliance.
  • Business and risk.
  • Ethics or HR, representing affected employees and customers.

The ideal size is small: five to seven people with decision-making power, not fifteen in advisory roles. The cadence that works is a monthly meeting with pre-reads, plus extraordinary sessions for urgent cases. The committee must have a written mandate clarifying what it decides itself, what it escalates to the executive committee, and what is operational.

A question worth resolving early is where it sits organizationally. The three common patterns are:

  • Reporting to the CIO or CTO.
  • To the CDO or data leader.
  • To the compliance director.

What is harmful is reporting to the area that proposes the most AI use cases, because it creates a structural conflict of interest. More on the regulatory context in our guide on EU AI Act compliance.

Acceptable use policy, not just security policy

The AI use policy must cover things a traditional IT security policy doesn't. Separating three layers works well (a sketch of how they can be encoded follows the list):

  1. Approved tools layer: which specific products can be used, on which corporate account, under which license.
  2. Permitted use cases layer: internal email drafting, programming assistance, summaries, and translation; differentiated from those requiring explicit authorization (customer data analysis, decisions about people, direct customer contact).
  3. Cross-cutting obligations layer: marking generated content, preserving evidence, not loading secrets into prompts.
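
As an illustration only, these three layers can be encoded as a small machine-readable policy that a gateway or approval workflow could check requests against. Everything in the sketch below is a hypothetical assumption: the tool names, use-case labels, and obligation identifiers are placeholders, not product or naming recommendations.

```python
# Hypothetical sketch: the three policy layers as data a gateway or
# approval workflow could evaluate. All names are illustrative.

AI_USAGE_POLICY = {
    # Layer 1: approved tools, with the corporate account and license they run under
    "approved_tools": {
        "example-llm-chat": {"account": "corporate-sso", "license": "enterprise"},
        "example-code-assistant": {"account": "corporate-sso", "license": "team"},
    },
    # Layer 2: use cases allowed by default vs. those requiring explicit authorization
    "use_cases": {
        "default_allowed": ["email_drafting", "coding_assistance", "summaries", "translation"],
        "requires_authorization": ["customer_data_analysis", "decisions_about_people", "direct_customer_contact"],
    },
    # Layer 3: cross-cutting obligations that apply to every use
    "obligations": ["mark_generated_content", "preserve_evidence", "no_secrets_in_prompts"],
}


def check_request(tool: str, use_case: str) -> str:
    """Return 'allowed', 'needs_authorization', or 'blocked' for a usage request."""
    if tool not in AI_USAGE_POLICY["approved_tools"]:
        return "blocked"
    if use_case in AI_USAGE_POLICY["use_cases"]["default_allowed"]:
        return "allowed"
    if use_case in AI_USAGE_POLICY["use_cases"]["requires_authorization"]:
        return "needs_authorization"
    return "blocked"


print(check_request("example-llm-chat", "summaries"))               # allowed
print(check_request("example-llm-chat", "decisions_about_people"))  # needs_authorization
```

The interesting part is not the data structure but the single answer per request (allowed, needs authorization, or blocked), which is what keeps the policy short and checkable.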

A temptation to avoid is writing a very long policy no one will read. Better a short policy, readable in twenty minutes, accompanied by concrete examples per functional area. The AI Act's literacy requirement is better met with workshops and real cases than with a fifty-page PDF buried on the intranet.

Systems inventory: the invisible foundation

Without an AI systems inventory, governance is impossible. In most organizations the inventory doesn't exist or is outdated. The AI Act mandates registering high-risk systems, but useful governance requires registering all of them.

Minimum fields for a usable registry:

  • System name and operating area.
  • Use case.
  • Underlying models and providers.
  • Training or fine-tuning data if any.
  • Inference data it accesses.
  • Output recipients.
  • Internal risk classification.
  • Date of last review and responsible person.

With that registry you can answer important questions: which systems make decisions about people, which send personal data to third parties, which will be covered by high-risk obligations when they enter into force.

The hard part of the inventory is keeping it alive. The recommendation is to integrate registration into the purchasing cycle and the change cycle: any new software with an AI component goes through a form that updates the inventory, and any significant change is reflected in its record. The inventory is not a file someone maintains by hand; it is the consequence of well-defined operational processes.
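
A minimal sketch, assuming Python 3.10+, of what one inventory record could look like when the registry is fed by the purchasing and change workflows rather than maintained by hand. The field names mirror the minimum fields listed above; the form keys and the helper function are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AISystemRecord:
    """One row of the AI systems inventory, mirroring the minimum fields above."""
    name: str
    operating_area: str
    use_case: str
    models_and_providers: list[str]
    training_data: str | None          # None if there is no training or fine-tuning data
    inference_data: list[str]          # data sources accessed at inference time
    output_recipients: list[str]
    risk_classification: str           # e.g. "low" | "medium" | "high"
    owner: str
    last_review: date = field(default_factory=date.today)


def register_from_purchase(form: dict) -> AISystemRecord:
    """Called by the purchasing workflow: any new software with an AI
    component fills a form, and this creates or updates its inventory record."""
    return AISystemRecord(
        name=form["product_name"],
        operating_area=form["requesting_area"],
        use_case=form["intended_use"],
        models_and_providers=form["providers"],
        training_data=form.get("training_data"),
        inference_data=form.get("inference_data", []),
        output_recipients=form.get("recipients", []),
        risk_classification="pending",   # classified later through the risk assessment
        owner=form["owner"],
    )
```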

Risk assessment and escalation

The AI Act classifies systems into four levels: unacceptable risk (prohibited), high risk, limited risk (transparency obligation), and minimal risk. A framework that works evaluates five dimensions per system:

  1. Impact on people’s rights (employees, customers, third parties).
  2. Sensitivity of data it processes.
  3. Degree of decision autonomy (purely assistive vs. deciding without a human in the loop).
  4. Reversibility of effects (easy to undo vs. hiring or credit decisions).
  5. External provider dependency and associated risks (contractual lock-in, service continuity, data access).

The combined score guides the required level of controls (a scoring sketch follows the list):

  • Low risk: self-managed under the general policy.
  • Medium risk: formal owner, monitoring plan, and annual review.
  • High risk: committee assessment, detailed impact evaluation, technical traceability and human oversight controls, semiannual review.
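
One way to operationalize this, sketched below under assumed conventions, is to rate each dimension from 1 (low) to 5 (high) and map the combined score to one of the three control tiers. The thresholds are illustrative assumptions, not values from the AI Act or from any specific methodology.

```python
# Hypothetical scoring sketch: each dimension rated 1 (low) to 5 (high).
DIMENSIONS = (
    "impact_on_rights",
    "data_sensitivity",
    "decision_autonomy",
    "irreversibility",
    "provider_dependency",
)


def risk_tier(scores: dict[str, int]) -> str:
    """Map the five dimension scores to the control tiers described above."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    total = sum(scores[d] for d in DIMENSIONS)   # range 5..25
    # Illustrative thresholds; each organization calibrates its own.
    if total >= 18 or scores["impact_on_rights"] == 5:
        return "high"       # committee assessment, impact evaluation, semiannual review
    if total >= 11:
        return "medium"     # formal owner, monitoring plan, annual review
    return "low"            # self-managed under the general policy


example = {
    "impact_on_rights": 4, "data_sensitivity": 3, "decision_autonomy": 2,
    "irreversibility": 4, "provider_dependency": 3,
}
print(risk_tier(example))  # total 16 -> "medium" with these illustrative thresholds
```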

For security teams integrating these evaluations with the technical stack, the article on Zero Trust and SIEM integration provides useful complementary context.

Audits: internal first, external later

AI audits have two complementary angles:

  • Compliance: verifies that systems align with policies and regulation (consents obtained, data retained only as long as planned, decisions documented, access limited).
  • Quality: verifies that systems work as expected (error rate, fairness across demographic groups, model drift, sufficient explainability).

The advice is to start with well-executed internal audits before hiring external auditors. The internal team knows the systems and has access to data that a third party takes time to obtain. Internal audits build organizational capability and catch problems earlier. External ones are useful when independent validation is needed for clients, regulators, or certifications, but they should not be the first step.

A practice worth adopting is a quarterly self-audit based on a short checklist: for each medium or high-risk system, ten or fifteen questions answered with evidence. Is the planned human review being executed? Are the agreed logs preserved? Are there undocumented incidents? Is the owner up to date?
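
A sketch of how that checklist could be captured as data, so every answer carries its evidence and gaps surface automatically. The four baseline questions come from the paragraph above; the structure, field names, and helper are assumptions.

```python
from dataclasses import dataclass


@dataclass
class ChecklistItem:
    question: str
    answer: bool | None = None    # None until answered during the quarterly review
    evidence: str = ""            # link or reference to the supporting evidence


def quarterly_checklist() -> list[ChecklistItem]:
    """Baseline questions for each medium or high-risk system; teams extend to 10-15."""
    return [
        ChecklistItem("Is the planned human review being executed?"),
        ChecklistItem("Are the agreed logs preserved?"),
        ChecklistItem("Are there undocumented incidents?"),
        ChecklistItem("Is the system owner up to date?"),
    ]


def open_findings(items: list[ChecklistItem]) -> list[str]:
    """Return questions that are unanswered or answered without evidence."""
    return [i.question for i in items if i.answer is None or not i.evidence]
```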

My read

Enterprise AI governance determines whether the organization can sustain AI use in the coming years without surprises. The AI Act sets a compliance floor, but the deep reason to do this well isn't the fine: it's that organizations without an inventory, a policy, and a committee will learn late about problems already happening inside their workflows.

Governance should not be a brake on AI use but an enabler: when the rules are clear, teams dare to try more things because they know where they can move. Organizations that use governance to forbid everything end up with shadow AI (people using ChatGPT from personal phones), which is the worst of all worlds. Those that use governance to enable with clear criteria advance faster than those with no governance at all.


Written by

CEO - Jacar Systems

Passionate about technology, cloud infrastructure and artificial intelligence. Writes about DevOps, AI, platforms and software from Madrid.