What the Act Actually Requires
The EU AI Act classifies AI systems by risk tier and assigns obligations accordingly. High-risk systems — including those used in employment screening, credit assessment, critical infrastructure management, and access to public services — must meet requirements across five domains:
- Transparency: documented communication to affected individuals about how AI-influenced decisions are made, not solely algorithmic explainability.
- Human oversight: governance structures that give accountable individuals the authority and practical ability to review, challenge, and override system outputs.
- Data governance: controls over training data quality, representativeness, and bias management throughout the system lifecycle.
- Documentation and logging: technical and process records sufficient to support audit and regulatory review.
- Accountability: a named individual or body with defined responsibility for each high-risk system's compliance posture.
None of these requirements resolve cleanly inside a technology function. Each one touches legal, risk, operations, and executive leadership. Treating them as a technical checklist creates a structural accountability gap from the outset.
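To give one concrete illustration of where those functions meet the technology: the documentation and logging requirement implies that individual AI-influenced decisions leave a record an auditor or reviewer can reconstruct later. The sketch below is a minimal Python example with field names that are assumptions for this illustration, not terminology from the Act; it shows the kind of per-decision entry an auditable log might hold.

```python
import json
import datetime

def log_decision(path: str, entry: dict) -> None:
    """Append one AI-influenced decision to an append-only JSON Lines log."""
    stamped = {
        "logged_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        **entry,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(stamped) + "\n")

# Hypothetical entry for a credit decision; every field name is illustrative.
log_decision("decision_log.jsonl", {
    "system_id": "credit-decisioning-v4",
    "model_version": "4.2.1",
    "case_ref": "APP-2024-018342",       # link back to the underlying application
    "output": "decline",
    "key_factors": ["debt-to-income ratio", "recent delinquency"],
    "human_reviewer": "credit.ops.qa",   # role with authority to review or override
    "overridden": False,
    "applicant_notice_sent": True,       # transparency obligation to the individual
})
```

The specific fields matter less than the principle: the record ties an output to a model version, a human with decision rights, and the communication made to the affected person.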
The Questions That Require Executive Ownership
Regulatory expectations increasingly point toward board-level engagement on AI risk — not periodic briefings, but active governance. Three questions in particular require executive ownership.
Who is accountable when an AI-assisted decision causes harm? This requires a documented answer: a specific role with defined authority, not an implicit assumption that accountability is shared. General Counsel, the CRO, and relevant business line owners need to establish this before systems enter production, not after an incident.
What is the organization's risk appetite for model error? Every AI system operating in a high-stakes decision domain will produce errors. The relevant question is not whether errors occur, but what level of error rate — and what type of error — is acceptable given the consequences for affected individuals. That is a risk appetite question. It belongs in the risk framework, alongside credit and operational risk tolerances, with appropriate governance sign-off.
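Risk appetite only becomes governable once it is written down in a form a deployment decision can be tested against. As a minimal sketch, assuming illustrative thresholds and a hypothetical `within_appetite` helper (none of this is prescribed by the Act), the Python below records approved error-rate tolerances for one system and checks a measured model against them before sign-off.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ErrorAppetite:
    """Illustrative record of an approved error tolerance for one system.

    Figures are placeholders; real thresholds would come from the relevant
    business line and risk committee, not from the engineering team.
    """
    system_id: str
    max_false_positive_rate: float   # share of negatives incorrectly flagged as positive
    max_false_negative_rate: float   # share of positives incorrectly cleared as negative
    approved_by: str                 # committee or named owner who signed this off
    review_date: str                 # next scheduled review (ISO date)

def within_appetite(appetite: ErrorAppetite,
                    measured_fpr: float,
                    measured_fnr: float) -> bool:
    """Return True only if measured error rates sit inside the documented tolerance."""
    return (measured_fpr <= appetite.max_false_positive_rate
            and measured_fnr <= appetite.max_false_negative_rate)

# Hypothetical usage: a screening model evaluated before release.
appetite = ErrorAppetite(
    system_id="candidate-screening-v2",
    max_false_positive_rate=0.05,
    max_false_negative_rate=0.02,
    approved_by="Model Risk Committee",
    review_date="2025-06-30",
)

if not within_appetite(appetite, measured_fpr=0.07, measured_fnr=0.01):
    print(f"{appetite.system_id}: outside approved appetite; escalate before deployment")
```

The point is not the particular figures but that the tolerance exists as an approved artifact, so a release decision can be held against it.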
How does AI governance integrate with existing risk infrastructure? The organizations navigating this effectively are not building separate AI ethics functions disconnected from the business. They are extending existing model risk management frameworks, integrating AI oversight into established risk committees, and applying familiar accountability structures to new system types. This reduces implementation drag and makes use of institutional knowledge already present in risk and compliance functions.
A Governance Baseline: Five Actions Before Production
For any high-risk AI system entering or already in deployment, a credible governance posture requires at minimum the following five actions (a sketch of how they might be captured as a single system record follows the list):
- System classification: confirm whether the system falls within a high-risk category under the Act, with both legal and technical input. This determination should not be made unilaterally by the technology function.
- Accountability mapping: assign a named owner for each in-scope system, with documented authority to suspend deployment if compliance conditions are not met.
- Oversight design: define what human review looks like operationally — who reviews outputs, at what frequency, using what information, and with what decision rights. A review interface is not a governance structure.
- Risk appetite documentation: record the organization's stated tolerance for false positives, false negatives, and disparate impact, and align this with the relevant business line and risk committee.
- Incident response integration: establish how an AI-related adverse event would be escalated, documented, reported to regulators where required, and communicated to affected individuals.
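One way to make these five items auditable is to hold them in a single per-system record that is reviewed alongside the system itself. The following is a minimal sketch, assuming hypothetical field names and a `ready_for_production` gate of our own invention rather than anything mandated by the Act; the point is that each baseline item has a concrete, inspectable place to live.

```python
# Illustrative governance record for one in-scope system.
# Field names and structure are assumptions for this sketch, not Act terminology.
governance_record = {
    "system_id": "credit-decisioning-v4",
    "classification": {
        "high_risk": True,
        "basis": "creditworthiness assessment",
        "signed_off_by": ["Legal", "Technology"],
    },
    "accountability": {
        "named_owner": "Head of Retail Credit Risk",
        "may_suspend_deployment": True,
    },
    "oversight": {
        "reviewer_role": "Credit Operations QA",
        "review_cadence": "weekly sample of declined applications",
        "decision_rights": "override and refer for manual decision",
    },
    "risk_appetite_ref": "risk-appetite/credit-decisioning-v4.yaml",
    "incident_response": {
        "escalation_path": ["system owner", "CRO", "regulatory affairs"],
        "regulator_notification_required": True,
    },
}

REQUIRED_KEYS = {"classification", "accountability", "oversight",
                 "risk_appetite_ref", "incident_response"}

def ready_for_production(record: dict) -> bool:
    """Minimal gate: every baseline item is present and an accountable owner is named."""
    return (REQUIRED_KEYS.issubset(record)
            and bool(record.get("accountability", {}).get("named_owner")))

print(ready_for_production(governance_record))  # True for this example
```

Whether such a record lives in a GRC platform, a model inventory, or version control matters less than that it exists, is kept current, and has an owner.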
This framework does not resolve all compliance questions — regulatory interpretation of the Act continues to develop — but it establishes a defensible governance baseline and makes accountability visible across the organization.
The Cost of Deferral
The Act carries penalties of up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious violations. For most organizations, however, the more proximate risk is not a first-cycle regulatory fine. It is an operational failure in a high-stakes decision domain before governance infrastructure is in place.
A hiring system that demonstrably disadvantages a protected group. A credit model that produces unexplainable rejections at scale. A public-sector benefits system that cannot account for its own outputs. These incidents carry reputational and legal consequences that persist well beyond any regulatory proceeding.
Organizations that have integrated AI governance into their risk operating model — rather than treating it as a bounded project — are better positioned to detect these issues before they become visible externally. That is not solely a compliance advantage. It is an operational one.
Where to Begin
The EU AI Act gives boards both a mandate and a structured reason to have a governance conversation that was previously easy to defer.
That conversation does not begin with a technology gap analysis. It begins with a straightforward accountability question: for each AI system operating in a consequential domain, who is responsible, what authority do they hold, and how would the organization know if something went wrong?
The organizations that answer that question clearly and build governance structures to match will be better prepared for regulatory scrutiny — and better protected against the operational risks that precede it.