AI governance in enterprise: committees, policies, audits

Academic chart quantifying the volume and type of institutions publishing AI governance guidelines, a reflection of how the topic has moved from abstract idea to concrete work inside companies

Enterprise AI governance stopped being an academic exercise on February 2nd, 2025, when the first EU AI Act provisions came into force: the prohibition of certain practices (social scoring, subliminal manipulation, sensitive biometric categorization, among others) and the obligation to ensure that staff using AI reach a minimum level of AI literacy. Sanctions for prohibited practices can reach 7 percent of global annual turnover, which focuses even the most distracted board's attention. And this is only the beginning: August 2025 brings obligations for general-purpose models, and from 2026 the duties for high-risk systems kick in.

In this piece I want to share what I've seen work when setting up or advising on AI governance in several organizations. This isn't a legal analysis of the AI Act (better sources exist for that), but a practical guide to how the work gets organized: committee, policies, inventory, risk assessment, audits.

The AI committee and who it should include

The most frequent mistake I see is setting up an AI committee with only technical profiles. It sounds sensible (they understand the technology), but governance is precisely about what lies beyond the technical: impact on people, legal compliance, reputation, cost. A useful committee combines four sensibilities: technical (engineering, data, security), legal and compliance, business and risk, and an ethics or HR voice representing the affected employees and customers.

The ideal size is small: five to seven people with decision-making power, not fifteen in an advisory role. The cadence that works is a monthly meeting with pre-reading, plus extraordinary sessions when a case requires a quick decision. The committee needs a written mandate clarifying what it decides itself, what it escalates to the executive committee, and what remains operational.

A question worth resolving early is where the committee sits organizationally. I've seen three patterns: reporting to the CIO or CTO, to the CDO or data leader, or to the compliance director. There's no universally better option; it depends on the maturity of AI use in the company and on the internal balance of power. What is harmful is reporting to the area that proposes the most AI use cases, because that creates a structural conflict of interest.

Acceptable use policy, not just security policy

The AI use policy must cover things a traditional IT security policy doesn't: which data types can be sent to which provider (and on what contractual basis), which automated decisions require human oversight, how prompts and outputs are documented when they are part of a business process, and which uses of generative AI are compatible with the confidentiality commitments signed with clients.

A good pattern is to separate the policy into three layers. The approved tools layer (which specific products can be used, on which corporate account, under which license). The permitted use cases layer (internal email drafting, programming assistance, summaries, translation), differentiated from those requiring explicit authorization (customer data analysis, decisions about people, direct customer contact). And the cross-cutting obligations layer (marking generated content, preserving evidence, not putting secrets into prompts).
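To make the three layers concrete, here is a minimal policy-as-data sketch of the kind that can feed an approval workflow or an internal "can I use X for Y?" helper. The tool name, the use cases, and the cautious default for unknown cases are illustrative assumptions, not part of any real policy.

```python
# Minimal policy-as-data sketch. Tool names, use cases, and the cautious
# default are illustrative assumptions, not a recommendation.
AI_USE_POLICY = {
    "approved_tools": {
        "ExampleChat Enterprise": {"account": "corporate", "license": "enterprise"},
    },
    "allowed_use_cases": {
        "internal email drafting", "programming assistance", "summaries", "translation",
    },
    "needs_authorization": {
        "customer data analysis", "decisions about people", "direct customer contact",
    },
    "cross_cutting_obligations": [
        "mark AI-generated content",
        "preserve prompts and outputs used in business flows",
        "never put secrets or credentials into prompts",
    ],
}


def check_use(tool: str, use_case: str) -> str:
    """Return 'allowed', 'needs_authorization', or 'blocked' for a tool/use-case pair."""
    if tool not in AI_USE_POLICY["approved_tools"]:
        return "blocked"
    if use_case in AI_USE_POLICY["needs_authorization"]:
        return "needs_authorization"
    if use_case in AI_USE_POLICY["allowed_use_cases"]:
        return "allowed"
    return "needs_authorization"  # unknown use cases take the cautious path


print(check_use("ExampleChat Enterprise", "summaries"))  # -> allowed
```

The point of the structure is that the default answer for anything not explicitly listed is "ask first", which is exactly the behavior the policy wants to encourage.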

A temptation to avoid is writing a very long policy no one will read. Better a short policy, readable in twenty minutes, accompanied by concrete examples per functional area. The AI Act’s literacy requirement is better met with workshops and real cases than with a fifty-page PDF buried in the intranet.

Systems inventory: the invisible foundation

Without an AI systems inventory there’s no possible governance. It seems obvious and yet, in most organizations, the inventory doesn’t exist or is outdated. The AI Act mandates registering high-risk systems, but useful governance requires registering all of them, not just the high-risk ones.

Minimum fields for a usable registry are the system name and operating area, the use case, underlying models and providers, training or fine-tuning data if any, inference data it accesses, output recipients, internal risk classification, date of last review, and responsible person. With that registry you can answer important questions: which systems make decisions about people, which send personal data to a third party, which will be covered by high-risk obligations when they enter into force.
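As an illustration, here is a minimal sketch of one inventory record as code. Field names and the example system are hypothetical (a real registry often lives in a GRC tool or a shared spreadsheet), and the two boolean flags at the end are an addition just to make the example queries trivial.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class AISystemRecord:
    """One row of the AI systems inventory; field names are illustrative."""
    name: str
    operating_area: str
    use_case: str
    models_and_providers: list[str]
    training_data: Optional[str]      # None if there is no training or fine-tuning
    inference_data: list[str]         # data the system accesses at inference time
    output_recipients: list[str]
    risk_classification: str          # internal scale, e.g. "low" / "medium" / "high"
    last_review: date
    owner: str
    decides_about_people: bool = False
    sends_personal_data_to_third_party: bool = False


# Hypothetical entry, to show the level of detail that makes the registry useful.
inventory = [
    AISystemRecord(
        name="invoice-triage-assistant",
        operating_area="Finance",
        use_case="Pre-classify incoming invoices for the accounts payable team",
        models_and_providers=["hosted LLM via cloud provider"],
        training_data=None,
        inference_data=["ERP invoice metadata"],
        output_recipients=["Accounts payable team"],
        risk_classification="medium",
        last_review=date(2025, 3, 1),
        owner="finance-ops lead",
    ),
]

# The questions in the text become simple queries over the registry.
decisions_about_people = [r for r in inventory if r.decides_about_people]
third_party_data_flows = [r for r in inventory if r.sends_personal_data_to_third_party]
```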

The hard part of the inventory is keeping it alive. My recommendation is to integrate registration into the purchasing cycle and the change cycle: any new software that includes an AI component goes through a form that updates the inventory, and any significant change to an existing system is reflected in its record. The inventory is not a file someone maintains by hand; it's the consequence of well-defined operational processes.

Risk assessment and escalation

The AI Act classifies systems into four levels: unacceptable risk (prohibited), high risk, limited risk (transparency obligation), and minimal risk. In enterprise practice, internal risk assessment is useful even when the system doesn’t fall in any regulated category, because it helps focus control effort.

A framework that works for me evaluates five dimensions per system. Impact on people's rights (employees, customers, third parties). Sensitivity of the data it processes. Degree of decision autonomy (purely assistive versus deciding without a human in the loop). Reversibility of effects (an easily undone draft versus a hiring or credit decision). And dependency on external providers and the associated risks (contractual lock-in, service continuity, data access).
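A minimal sketch of how the five dimensions can be captured. The 1-to-5 scale, the equal weighting, and the example scores are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass


@dataclass
class RiskScore:
    """Scores from 1 (low concern) to 5 (high concern) per dimension.
    The scale and the equal weighting are illustrative assumptions."""
    impact_on_rights: int      # employees, customers, third parties
    data_sensitivity: int
    decision_autonomy: int     # 1 = purely assistive, 5 = decides without a human
    irreversibility: int       # 1 = easy to undo, 5 = hiring/credit-style decisions
    provider_dependency: int   # lock-in, service continuity, data access

    def total(self) -> int:
        return (self.impact_on_rights + self.data_sensitivity
                + self.decision_autonomy + self.irreversibility
                + self.provider_dependency)


cv_screening = RiskScore(5, 4, 3, 5, 3)   # hypothetical CV-screening assistant
print(cv_screening.total())               # -> 20
```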

The combined score across these dimensions guides the required level of controls. Low-risk systems self-manage under the general policy. Medium-risk systems require a formal owner, a monitoring plan, and an annual review. High-risk systems go through committee assessment, a detailed impact evaluation (data protection impact assessment plus an AI-specific one), technical controls such as decision traceability and human oversight, and a semiannual review.
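Continuing the illustrative 1-to-5 scale above, a sketch of how the combined score could map to control tiers. The cut-off points are arbitrary examples that each organization would calibrate for itself.

```python
def control_tier(total_score: int) -> dict:
    """Map a combined score (5-25 on the illustrative scale) to a control tier.
    The thresholds are arbitrary examples, not a recommendation."""
    if total_score <= 10:
        return {"tier": "low", "controls": ["self-managed under the general policy"]}
    if total_score <= 17:
        return {"tier": "medium", "controls": [
            "formal owner", "monitoring plan", "annual review"]}
    return {"tier": "high", "controls": [
        "committee assessment",
        "impact evaluation (DPIA plus AI-specific assessment)",
        "decision traceability and human oversight",
        "semiannual review"]}


print(control_tier(20)["tier"])  # -> high
```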

Audits: internal first, external later

AI audits have two complementary angles. The compliance angle verifies that systems align with policies and regulation: consents obtained, data retained only as long as planned, decisions documented, system access limited. The quality angle verifies that systems work as expected: error rate, fairness across demographic groups, model drift relative to the training data, explainability sufficient for users.
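For the quality angle, here is a hedged sketch of one of the simplest checks: error rate per demographic group with a disparity flag. The record format and the 1.25 ratio threshold are assumptions made for illustration, not a standard fairness metric.

```python
from collections import defaultdict


def error_rate_by_group(records: list[dict]) -> dict[str, float]:
    """Error rate per demographic group. Each record is assumed to look like
    {"group": ..., "prediction": ..., "actual": ...}; the field names are
    an assumption for this sketch, not a standard schema."""
    errors: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["actual"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}


def flag_disparity(rates: dict[str, float], max_ratio: float = 1.25) -> bool:
    """Flag for review if the worst group's error rate exceeds the best group's
    by more than max_ratio (the threshold is an illustrative choice)."""
    worst, best = max(rates.values()), min(rates.values())
    return (best > 0 and worst / best > max_ratio) or (best == 0 and worst > 0)
```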

My advice is to start with well-executed internal audits before hiring external auditors. The internal team knows the systems and has access to data that a third party would take time to obtain. Internal audits build organizational capability and catch problems earlier. External audits are useful when independent validation is needed for clients, regulators, or certifications, but they shouldn't be the first step.

A practice worth adopting is quarterly self-audits based on a short checklist. For each medium or high-risk system, ten or fifteen questions answered with evidence: Is the planned human review being executed? Are the agreed logs preserved? Are there undocumented incidents? Is the owner still current? This creates a cadence of attention that prevents silent drift.
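A sketch of what that checklist could look like as a reusable structure. The questions mirror the ones above (phrased so that the expected answer is yes), and the evidence requirement is what turns a checkbox exercise into an audit; the answer format is an assumption for this sketch.

```python
from datetime import date

# Questions mirror the ones in the text, phrased so the expected answer is yes.
SELF_AUDIT_QUESTIONS = [
    "Is the planned human review being executed?",
    "Are the agreed logs being preserved?",
    "Have all incidents been documented?",
    "Is the named owner still current?",
]


def self_audit(system_name: str, answers: dict) -> dict:
    """answers maps each question to {"answer": bool, "evidence": str}.
    Anything without evidence, or answered no, gets flagged for follow-up."""
    missing_evidence = [q for q in SELF_AUDIT_QUESTIONS
                        if q not in answers or not answers[q].get("evidence")]
    failed = [q for q in SELF_AUDIT_QUESTIONS
              if answers.get(q, {}).get("answer") is False]
    return {
        "system": system_name,
        "date": date.today().isoformat(),
        "missing_evidence": missing_evidence,
        "failed_checks": failed,
        "needs_follow_up": bool(missing_evidence or failed),
    }
```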

How to think about the decision

Enterprise AI governance isn't glamorous work, but it determines whether the organization can sustain AI use in the coming years without surprises. The AI Act sets a compliance floor, but the deep reason to do this well isn't the fine: it's that organizations without an inventory, a policy, and a committee will learn late about problems already happening inside their workflows.

My experience is that those who start early (in 2025, ideally now) reach 2026 with the house in order and without stress. Those who postpone end up compressing a year and a half of work into three months, with lots of improvisation and a real risk of accumulating poorly documented decisions that later require retrospective audits. The difference isn't how much is spent, but how much is thought through before starting.

A last reflection I find important: governance shouldn't be a brake on AI use, but an enabler. When the rules are clear, teams dare to try more things because they know where they can move. Organizations that use governance to forbid everything end up with shadow AI (people using ChatGPT from personal phones), which is the worst of all worlds. Those that use governance to enable with clear criteria advance faster than those with no governance at all.