Getting this right has both regulatory and commercial consequences.
The EU AI Act: Prescriptive, Binding, and Extraterritorial
The EU AI Act establishes the most comprehensive binding AI governance framework currently in force. Its central mechanism is a risk-based classification system. High-risk AI applications — spanning categories such as critical infrastructure management, employment decision-support, and access to essential services — carry mandatory obligations across six domains: transparency, data governance, human oversight, accuracy, cybersecurity, and post-market monitoring.
For providers and deployers of high-risk systems, this translates into concrete operational requirements: registration in an EU database, technical documentation sufficient to demonstrate conformity, functioning risk management systems, and ongoing incident reporting obligations.
The Act has extraterritorial reach. Organizations headquartered outside the EU that deploy AI systems affecting EU residents or EU markets are within scope. Non-compliance carries financial penalties scaled to global annual turnover.
For large organizations, compliance requires structured investment across legal, technical, and governance functions. For mid-sized organizations, the challenge is proportionality — EU obligations were largely designed with enterprise-scale systems in mind, and calibrating them to organizational context requires deliberate interpretation.
The MENA Landscape: Principle-Based, Varied, and Moving Quickly
The MENA region does not have a single AI governance framework. What it has is a diverse and rapidly evolving landscape of national AI strategies, sector-specific regulatory guidance, and principle-based soft-law frameworks, with meaningful variation between jurisdictions.
Several Gulf states have published national AI strategies with governance components addressing transparency, accountability, and public-sector AI deployment. Morocco has established data protection regulation under Law No. 09-08, aligned with European data protection standards, and has been an active participant in international digital governance dialogue. Across the region, financial regulators, healthcare authorities, and telecommunications bodies have issued sector-specific AI guidance.
The direction of travel is clearly toward more formalised, binding AI regulation. Multiple MENA jurisdictions are in active consultation on dedicated AI frameworks. The relevant question for organizations operating in the region today is not whether binding regulation is coming — it is whether their governance foundations are in place before it arrives.
The Structural Difference That Matters Most
The most practically significant difference between the two environments is the mode of accountability they create.
The EU AI Act is prescriptive. It specifies, in considerable procedural detail, what must be documented, tested, monitored, and reported. Compliance is demonstrable against defined criteria.
Most current MENA frameworks are principle-based. They articulate objectives — fairness, transparency, human-centredness — without specifying detailed technical or procedural implementation requirements. Compliance, in this environment, is demonstrated through behavior and organizational posture.
This distinction creates a specific risk for organizations that default to EU compliance as their enterprise standard. Satisfying the EU's prescriptive requirements does not automatically satisfy the contextual, relationship-oriented expectations of MENA regulators. Organizations that treat EU compliance as the ceiling — rather than the floor — of their governance investment risk deprioritising the stakeholder engagement, contextual transparency, and genuine human oversight mechanisms that MENA regulators are actively assessing.
A Cross-Jurisdictional Governance Architecture: What Works
Organizations navigating this challenge effectively share a common approach. They are not maintaining separate governance frameworks for each jurisdiction. They are building a coherent foundational architecture and applying it with appropriate contextual calibration.
The characteristics of a governance architecture that holds up across both environments are:
- Documented accountability. Clear RACI structures across the AI lifecycle — from procurement and development through deployment and monitoring — that satisfy EU requirements for demonstrable human oversight and fulfil the accountability expectations embedded in MENA principles.
- Model risk management controls. Formal validation, documentation, and monitoring processes for AI systems in scope. These address EU technical documentation requirements and establish the audit-readiness that emerging MENA regulation is likely to require.
- Incident escalation and reporting pathways. Defined processes for identifying, escalating, and reporting AI-related failures or adverse outcomes. These are already mandatory under the EU Act for high-risk systems; establishing them now positions organizations well for MENA regulatory convergence.
- Stakeholder engagement protocols. Structured engagement with regulators, affected communities, and internal users. In principle-based regulatory environments, demonstrated engagement often carries more weight than documented policy.
- Lifecycle governance checkpoints. Formal review and approval gates at key stages of the AI lifecycle — procurement, deployment, material change, and periodic review — that provide a consistent control framework regardless of jurisdiction.
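To make the checkpoint idea concrete, the lifecycle gates above could be encoded in an internal AI system inventory. The following is a minimal, hypothetical sketch — the gate names, class names, and risk labels are illustrative assumptions, not terms defined by the EU AI Act or any MENA framework:

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle gates, mirroring the checkpoints listed above.
GATES = ["procurement", "deployment", "material_change", "periodic_review"]

@dataclass
class AISystem:
    name: str
    risk_class: str                      # e.g. "high" per an internal risk mapping
    completed_gates: set = field(default_factory=set)

    def outstanding_gates(self) -> list:
        """Return the gates not yet passed, in lifecycle order."""
        return [g for g in GATES if g not in self.completed_gates]

# Example: a high-risk hiring-support system that has cleared procurement only.
system = AISystem("cv-screening", "high", {"procurement"})
print(system.outstanding_gates())
```

The value of even a simple inventory like this is jurisdictional neutrality: the same record of gates passed can evidence EU documentation obligations and demonstrate the organizational posture that principle-based MENA regulators assess.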
The Commercial Dimension
Beyond regulatory compliance, coherent cross-jurisdictional AI governance is increasingly a factor in commercial relationships. Clients, procurement functions, and institutional partners on both sides of the Mediterranean are applying more rigorous scrutiny to how organizations govern their AI systems. Investment counterparties are incorporating AI governance into due diligence processes.
Organizations that can demonstrate a governance architecture that is coherent, independently verifiable, and genuinely applied — rather than jurisdiction-specific or cosmetic — occupy a distinctive position in this environment. Regulatory trust, once built, is a durable commercial asset.
Where to Start
For organizations with operations across both regions, a useful diagnostic covers three questions:
- Have you mapped your AI systems against the EU AI Act's risk classification — and identified which systems are also in scope for applicable MENA sector regulation?
- Do you have documented accountability and model risk controls that could withstand regulatory scrutiny in either environment?
- Is your governance architecture designed to accommodate the binding MENA frameworks currently in development — or will it require significant retrofit when those frameworks arrive?
If your organization is navigating cross-jurisdictional AI governance, we are available to discuss your specific context.
This article reflects general practitioner perspectives on AI governance and regulatory trends. It does not constitute legal advice. Organizations should seek qualified legal counsel regarding jurisdiction-specific compliance obligations.