The Gap Between Engagement and Oversight
Most boards are now receiving AI-related information: investment proposals, product roadmaps, periodic risk briefings. This is progress. But receiving information is not the same as having the governance structures to evaluate it, challenge it, or act on it when something goes wrong.
Effective AI oversight requires specific capabilities at board level: clarity on what AI systems the organization operates, defined accountability when those systems fail, tested incident response structures, and a coherent connection between AI strategy and organizational values. In most organizations, these capabilities are partially developed at best.
The five questions below reflect the gaps most consistently identified in AI governance readiness reviews across regulated industries. None has a comfortable answer. All require honest internal inquiry before they can be addressed.
1. Do We Have a Complete, Risk-Classified Inventory of Our AI Systems?
Governance begins with visibility. Before a board can oversee AI risk, it needs to know what AI systems the organization operates, where they are deployed, and how they are classified under applicable regulation.
The EU AI Act establishes a tiered risk framework. High-risk applications — in areas including credit decisioning, employment screening, healthcare diagnostics, and critical infrastructure management — carry significant compliance obligations: conformity assessments, technical documentation requirements, human oversight protocols, and post-market monitoring. Regulatory expectations increasingly point toward organizations having a defensible classification methodology, not just a list of tools.
In practice, many organizations have deployed systems in high-risk categories without formally recognizing them as such. And in most cases, whatever inventory exists is not maintained in a form accessible to the board.
The actionable step: Establish a centralized AI system inventory with defined ownership, a classification methodology aligned to applicable regulation, and a clear refresh cadence. This is the foundational control on which all other governance depends.
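For concreteness, here is a minimal sketch of what one entry in such an inventory might look like. The `AISystemRecord` fields, `RiskTier` values, and 90-day review cadence are illustrative assumptions, not a prescribed schema; the tiers loosely mirror the EU AI Act's framework.

```python
# Minimal sketch of a board-accessible inventory entry. Field names,
# risk tiers, and the default cadence are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # e.g. credit decisioning, hiring
    LIMITED = "limited"             # transparency obligations apply
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    system_name: str
    accountable_owner: str          # a named individual, not a team alias
    business_function: str
    risk_tier: RiskTier
    classification_rationale: str   # the defensible methodology, in brief
    last_reviewed: date

def overdue_for_review(record: AISystemRecord,
                       cadence_days: int = 90,
                       today: date | None = None) -> bool:
    """Flag entries whose review is older than the refresh cadence."""
    today = today or date.today()
    return (today - record.last_reviewed) > timedelta(days=cadence_days)
```

Even a register this simple forces the two disciplines the section describes: a recorded rationale for each classification, and a refresh cadence that can be checked rather than assumed.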
2. Who Is Accountable — Not Just Responsible — When an AI System Causes Harm?
Responsibility and accountability are not interchangeable. In most AI operating models, responsibility is distributed across functions: data science teams develop models, product teams deploy them, operations teams run them. When a high-stakes automated decision produces a harmful outcome, the question of who is accountable — who holds the authority and the obligation to answer for that outcome — is frequently unanswerable in practice.
The EU AI Act requires clear accountability structures for high-risk AI systems. Beyond regulatory compliance, however, accountability ambiguity is a governance failure in its own right.
For each material AI deployment, the board should be able to identify the individual or governance body accountable for its performance, its risks, and its outcomes. That person or body should have both the information access and the authority to act — and should understand they hold that accountability.
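That expectation can be made checkable. The sketch below, in the spirit of the inventory entry above, tests whether each deployment resolves to a party who holds all three conditions; the `AccountabilityEntry` fields are assumptions for illustration, not a standard.

```python
# Illustrative check: every material deployment should resolve to one
# named accountable party with information access, authority to act,
# and an acknowledged role. Field names are assumptions.
from dataclasses import dataclass

@dataclass
class AccountabilityEntry:
    system_name: str
    accountable_party: str      # a named individual or governance body
    has_information_access: bool
    has_authority_to_act: bool  # e.g. can order the system suspended
    role_acknowledged: bool     # the party has confirmed they hold it

def accountability_gaps(entries: list[AccountabilityEntry]) -> list[str]:
    """Return systems where accountability exists only on paper."""
    return [
        e.system_name for e in entries
        if not (e.accountable_party
                and e.has_information_access
                and e.has_authority_to_act
                and e.role_acknowledged)
    ]
```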
Where accountability is unclear, an AI incident will expose the gap quickly: in front of regulators, customers, and the press.
3. Has Governance Been Stress-Tested Against a Real Failure Scenario?
Governance frameworks look coherent on paper. The test is whether they function under pressure.
Consider a concrete scenario: an automated decision system used in a high-volume customer process is found to have produced systematically different outcomes for members of a protected group. The story is about to break publicly. Walk through the response.
- Who is notified first, and within what timeframe?
- What decisions need to be made in the first 24 hours, and by whom?
- How does the organization communicate with affected individuals?
- When and how are regulators informed?
- Who holds the authority to suspend the system while investigation is underway?
Organizations that have conducted this exercise — even informally, as a tabletop scenario — consistently identify significant gaps. The gaps are almost never in the technical infrastructure. They are in the decision-making structures, escalation paths, and communication protocols: the governance layer, not the model layer.
A structured tabletop exercise, with board and senior leadership involvement, is one of the highest-return investments an organization can make in AI governance readiness.
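One way to make such an exercise concrete is to encode the escalation path as data, so the tabletop tests something written down rather than assumed. The sketch below is hypothetical: the roles, actions, and timeframes are assumptions for illustration, and an actual runbook will differ.

```python
# Hypothetical tabletop aid: an escalation path expressed as data can
# be walked through, challenged, and revised before a real incident.
from dataclasses import dataclass

@dataclass
class EscalationStep:
    within_hours: int   # deadline relative to incident detection
    action: str
    owner: str          # a role, not an individual's name

INCIDENT_RUNBOOK = [
    EscalationStep(1,  "Notify accountable executive and legal counsel", "System owner"),
    EscalationStep(4,  "Decide whether to suspend the system", "Accountable executive"),
    EscalationStep(24, "Brief the board; prepare regulator notification", "General counsel"),
    EscalationStep(72, "Communicate with affected individuals", "Customer operations"),
]

def unowned_steps(runbook: list[EscalationStep]) -> list[str]:
    """Surface steps with no named owner: a common tabletop finding."""
    return [s.action for s in runbook if not s.owner.strip()]
```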
4. Do Employees Have a Clear Channel to Raise AI Concerns?
The EU AI Act includes provisions on human oversight and the right of individuals affected by automated decisions to seek explanation and redress. Effective internal governance requires more: a culture where employees who observe AI systems operating incorrectly, unfairly, or inconsistently with stated policy have a clear and trusted mechanism for raising those concerns.
Three operational questions matter here:
- Is there a named function that receives AI-related concerns from employees?
- Do employees know it exists and how to use it?
- When concerns are raised, how are they evaluated, escalated, and resolved?
Where this channel is absent or unclear, AI risk is effectively invisible until it becomes a crisis. Employees who observe problems will either raise them informally, creating inconsistency and exposure, or not raise them at all. Neither outcome supports the oversight the board requires.
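As a sketch of what "evaluated, escalated, and resolved" could look like operationally, the hypothetical `AIConcern` record below gives every concern an explicit, auditable lifecycle; the status values and field names are assumptions, not a reference implementation.

```python
# Sketch of a concern record with an auditable lifecycle, assuming a
# named intake function exists to receive it. States are illustrative.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class ConcernStatus(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under_review"
    ESCALATED = "escalated"
    RESOLVED = "resolved"

@dataclass
class AIConcern:
    system_name: str
    summary: str
    raised_at: datetime
    status: ConcernStatus = ConcernStatus.RECEIVED
    history: list[str] = field(default_factory=list)

    def transition(self, new_status: ConcernStatus, note: str) -> None:
        """Log every state change so escalation paths stay auditable."""
        self.history.append(f"{self.status.value} -> {new_status.value}: {note}")
        self.status = new_status
```

Even this small amount of structure answers the third question above: a concern that never leaves "received" is visible to oversight in a way an informal email thread is not.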
5. Is Your AI Strategy Coherent With the Organization You Say You Are?
This is the question most boards are slowest to engage — and the most consequential.
Every organization holds stated commitments: to fairness, to transparency, to customer trust, to equitable treatment. Most organizations are also deploying AI systems that, examined closely, create tension with at least some of those commitments.
Models trained on historical data can systematically perpetuate patterns the organization has publicly committed to addressing. Automated decisions can remove the human judgement that the organization's own policies require in complex cases. Efficiency gains in one function can reduce the transparency that customers and regulators increasingly expect as a baseline.
The board's role is not to approve every individual AI deployment decision. It is to ensure that the organization's AI strategy — its aggregate use of AI across functions and geographies — is coherent with its stated identity, its regulatory obligations, and its commitments to stakeholders. That conversation requires honest answers to uncomfortable questions, and a governance structure with the authority to act on what it finds.
A Practical Starting Point
None of these questions resolves in a single board session. But asking them formally — with a structured follow-up process and named accountability for resolution — is the difference between AI governance that functions and AI governance that exists only on paper.
Organizations that build this capability before an incident are in a materially different position from those that wait. Regulatory investigations, reputational incidents, and operational failures are significantly more costly to manage than the governance infrastructure required to prevent them.
If your board is conducting an AI governance review or building an AI risk framework, a structured readiness assessment against these five dimensions is a practical starting point. We work with organizations at every stage of this process.