- <340 ns per decision
- <600 ps per logical op
- 2.9M decisions/sec throughput
- On-premises native, no cloud runtime

Regulatory Context — EU AI Act

Informational overview of the EU regulatory framework and its correspondence with structural properties of the decision infrastructure.

AI Act — Status & Timeline

The main EU framework is Regulation (EU) 2024/1689.

Entered into force on 1 August 2024.
Application is staggered: some provisions already apply, others are scheduled for 2 August 2026 and 2 August 2027.

On 19 November 2025, the European Commission published a proposal to amend the AI Act (“Digital Omnibus on AI”). This proposal is going through the legislative procedure and is not yet in force. Timelines and obligations may evolve.

Architectural Correspondence

High-level mapping between regulatory expectations and structural properties.

| Regulatory expectation | Structural property |
| --- | --- |
| Decision traceability | Deterministic evaluation linked to explicit policy |
| Record retention | Append-only ledger |
| Transparency | Exportable, verifiable evidence artifacts |
| Human oversight | Explicit routing of Indeterminate (I) outcomes |
| Access governance | Policy-bound authorization |
| Staged adoption | Observe / Compare / Apply with rollback |
| Post-market monitoring (Art. 72) | KL divergence drift detection, T/F/I distribution, escalation metrics |
| Explainability (Art. 13) | Deterministic multilingual explanations (FR/DE/IT/EN), zero LLM |
| Risk management (Art. 9) | AI System Risk Registry with Annex III auto-classification |
| Annex III classification | 15-question questionnaire, automatic obligation mapping Art. 9–72 |
| Serious incident reporting (Art. 73) | Automated incident detection, severity classification, deadline tracking, authority notification |
| Multi-regulation compliance | Cross-regulation overlay: 8 frameworks (EU AI Act, DORA, MiFID II, FINMA, GDPR, LPD/nDSG, SOX, Basel III), 27 articles |
| AI system transparency | AI Bill of Materials (SPDX 3.0): per-decision and per-system component inventory with integrity verification |
| External verifiability | Public Verification Portal: zero-knowledge decision verification via Merkle inclusion proofs |

Proven Compliance, Not Declared Compliance

Traditional compliance relies on documentation: policies written, checklists completed, audits passed at a point in time. OmegaOS™ produces structural compliance — every decision carries its own cryptographic proof of correctness. The evidence is not reconstructed after the fact. It is generated at decision time, recorded immutably, and exportable on demand.

This distinction matters when a decision must be defended before a regulatory authority. A checklist states that a process existed. A decision proof demonstrates that it was followed — with the evidence that was present, under the rules that applied, at the exact moment the decision was made.
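An externally verifiable decision proof of this kind can be checked in a few lines. A minimal sketch, assuming SHA-256 Merkle trees with order-independent (sorted) sibling concatenation; the actual OmegaOS™ proof format and hashing convention are not specified here and may differ:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list[bytes], root: bytes) -> bool:
    """Recompute the Merkle root from a leaf and its sibling path.

    Siblings are combined in sorted order so the verifier needs no
    left/right direction flags (an illustrative convention).
    """
    node = sha256(leaf)
    for sibling in proof:
        node = sha256(min(node, sibling) + max(node, sibling))
    return node == root
```

A verifier holding only the published root and a short sibling path can confirm that a specific decision record is in the ledger, without seeing any other record.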

Post-Market Surveillance (Art. 72)

EU AI Act Art. 72 requires high-risk AI system providers to “actively and systematically collect, document and analyse data on the performance” throughout the system’s lifecycle — operational deadline August 2, 2026.

OmegaOS™ measures decision distribution (True / False / Indeterminate) over rolling time windows and detects drift via KL divergence between baseline and current distribution. Human escalation resolution times are tracked at p50, p95, and p99 percentiles. When drift exceeds configurable policy thresholds, automated alerts are triggered.

| Metric | Method |
| --- | --- |
| Decision distribution | T/F/I counts over configurable rolling windows |
| Drift detection | KL divergence with Laplace smoothing |
| Escalation latency | p50 / p95 / p99 resolution times |
| Alert thresholds | Configurable per policy, automated escalation |

No data scientist required. Drift detection is deterministic signal processing — not ML inference.
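The drift check above is indeed plain signal processing. A minimal sketch, assuming rolling-window counts are already collected; the smoothing constant and alert threshold are illustrative, not OmegaOS™ defaults:

```python
import math
from collections import Counter

OUTCOMES = ("T", "F", "I")  # True / False / Indeterminate

def smoothed_dist(counts: Counter, alpha: float = 1.0) -> dict[str, float]:
    """Laplace-smoothed probability distribution over T/F/I outcomes."""
    total = sum(counts[o] for o in OUTCOMES) + alpha * len(OUTCOMES)
    return {o: (counts[o] + alpha) / total for o in OUTCOMES}

def kl_divergence(p: dict[str, float], q: dict[str, float]) -> float:
    """D_KL(P || Q); Laplace smoothing guarantees q[o] > 0."""
    return sum(p[o] * math.log(p[o] / q[o]) for o in OUTCOMES)

def drift_alert(baseline: Counter, window: Counter,
                threshold: float = 0.05) -> bool:
    """Fire when the current window diverges from the baseline."""
    return kl_divergence(smoothed_dist(window), smoothed_dist(baseline)) > threshold
```

Because the computation is a closed-form comparison of two count vectors, the same inputs always produce the same alert decision.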

AI System Risk Registry (Annex III)

Automatically classify your AI systems by risk level (Prohibited / High / Limited / Minimal) using a structured 15-question questionnaire aligned with EU AI Act Annex III. High-risk systems are automatically assigned applicable obligations (Art. 9 through Art. 72) with per-article compliance status tracking.

| Capability | Implementation |
| --- | --- |
| Risk classification | 15-question Annex III questionnaire → automatic risk level |
| Obligation mapping | Art. 9–72 per-system, auto-generated for high-risk |
| Compliance tracking | Per-article status: Not Addressed / In Progress / Compliant |
| Assessment history | Every classification recorded with timestamp and rationale |
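The classification logic can be sketched as a decision ladder over questionnaire answers. The question keys below are illustrative stand-ins, not the actual 15 questions; prohibited practices map to Art. 5, high-risk use cases to Annex III, and transparency-only cases to Art. 50:

```python
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "Prohibited"
    HIGH = "High"
    LIMITED = "Limited"
    MINIMAL = "Minimal"

def classify(answers: dict[str, bool]) -> RiskLevel:
    """Illustrative subset of an Annex III questionnaire: booleans keyed
    by topic, evaluated from most to least restrictive category."""
    if answers.get("social_scoring") or answers.get("subliminal_manipulation"):
        return RiskLevel.PROHIBITED      # Art. 5 prohibited practices
    if any(answers.get(q) for q in ("biometric_id", "credit_scoring",
                                    "employment_screening", "essential_services")):
        return RiskLevel.HIGH            # Annex III high-risk use cases
    if answers.get("interacts_with_humans"):
        return RiskLevel.LIMITED         # transparency obligations only
    return RiskLevel.MINIMAL
```

Ordering matters: a system that is both prohibited and high-risk by answer pattern must classify as Prohibited, so the ladder checks the most restrictive category first.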

From Obligations to Architecture

The EU AI Act’s core obligations for high-risk systems are not a checklist. They are architectural requirements. OmegaOS™ implements each as a technical property.

| Article | Obligation | OmegaOS™ property |
| --- | --- | --- |
| Art. 9 | Risk management system | Risk Registry + Ascension coherence audit |
| Art. 12 | Automatic logging / traceability | Append-only Decision Evidence Log |
| Art. 13 | Transparency | Deterministic multilingual explanations |
| Art. 14 | Human oversight | Trilean logic: Indeterminate forces escalation |
| Art. 15 | Accuracy, robustness, security | Formal invariants (TLA+), Merkle integrity, Ed25519 |
| Art. 72 | Post-market surveillance | KL drift monitoring, T/F/I distribution, automated alerts |
| Art. 73 | Serious incident reporting | Automated detection, severity-tiered deadlines, report generation, authority notification |

Compliance is verifiable in code — not in a document.
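The Art. 14 property in particular follows from the logic itself. A minimal sketch, assuming strong Kleene three-valued semantics for the trilean connectives (the actual OmegaOS™ semantics are not specified here); the routing labels are illustrative:

```python
from enum import Enum

class Trilean(Enum):
    T = "True"
    F = "False"
    I = "Indeterminate"

def tri_and(a: Trilean, b: Trilean) -> Trilean:
    """Strong Kleene conjunction: False dominates, then Indeterminate
    propagates, and only T ∧ T yields True."""
    if Trilean.F in (a, b):
        return Trilean.F
    if Trilean.I in (a, b):
        return Trilean.I
    return Trilean.T

def route(outcome: Trilean) -> str:
    """An Indeterminate outcome can never auto-execute; it is the only
    outcome mapped to a human queue."""
    return {Trilean.T: "auto-approve",
            Trilean.F: "auto-reject",
            Trilean.I: "escalate-to-human"}[outcome]
```

Since Indeterminate propagates through every conjunction that is not already False, any unresolved evidence anywhere in a policy forces the whole decision into human review by construction.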

Multi-Regulation Compliance

Regulatory obligations do not come from a single framework. Banks face EU AI Act, DORA, MiFID II, FINMA, GDPR, LPD/nDSG, SOX, and Basel III simultaneously. OmegaOS™ evaluates each decision against all applicable frameworks in a single pass.

| Framework | Articles covered |
| --- | --- |
| EU AI Act | Art. 9, 12, 13, 14, 15, 72, 73 |
| GDPR | Art. 6, 22, 25, 35 |
| DORA | Art. 6, 8, 11, 17, 28 |
| MiFID II | Art. 16, 25, 27 |
| FINMA | Circular 2023/1, Model Risk |
| Swiss LPD/nDSG | Art. 22 |
| SOX / Basel III | Operational risk, audit trail requirements |

27 articles. 8 frameworks. One compliance matrix per tenant. Real-time gap analysis.
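A single-pass overlay of this kind amounts to evaluating every framework's predicates against the same decision record. A minimal sketch with invented rule predicates and field names, shown only to illustrate the matrix-per-decision shape:

```python
# Illustrative rules: each framework contributes predicates over the same
# decision record; the article keys mirror the matrix above, the lambdas do not.
RULES = {
    ("EU AI Act", "Art. 12"): lambda d: d["logged"],
    ("EU AI Act", "Art. 14"): lambda d: d["outcome"] != "I" or d["escalated"],
    ("GDPR", "Art. 22"): lambda d: not d["automated_legal_effect"] or d["human_review"],
    ("DORA", "Art. 11"): lambda d: d["rollback_available"],
}

def evaluate(decision: dict) -> dict[tuple, bool]:
    """One pass over the decision yields one per-article compliance row."""
    return {key: rule(decision) for key, rule in RULES.items()}

def gaps(decision: dict) -> list[tuple]:
    """Real-time gap analysis: every (framework, article) pair that fails."""
    return [key for key, ok in evaluate(decision).items() if not ok]
```

Because all predicates read the same record, adding a ninth framework is a data change (more rules), not an architectural change.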

Serious Incident Reporting (Art. 73)

EU AI Act Art. 73 requires providers and deployers of high-risk AI systems to report serious incidents to national competent authorities. Deadlines are strict: initial report within days, complete report with corrective measures to follow.

OmegaOS™ automates the entire pipeline: incident detection from drift alerts or manual filing, severity classification with computed deadlines (2/10/15 days), initial and complete report generation, and structured authority notification with full audit trail.

| Capability | Implementation |
| --- | --- |
| Detection | Automated from drift alerts or manual filing with linked decisions |
| Classification | Severity tiers: Critical (2 days), High (10 days), Standard (15 days) |
| Report generation | Initial report + complete report with corrective actions |
| Authority notification | Structured pipeline with timeline tracking and overdue alerts |
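The deadline computation for the 2/10/15-day tiers is straightforward calendar arithmetic. A minimal sketch, assuming calendar-day deadlines counted from detection; function names are illustrative:

```python
from datetime import date, timedelta
from enum import Enum

class Severity(Enum):
    """Tier value = days allowed for the initial Art. 73 report."""
    CRITICAL = 2
    HIGH = 10
    STANDARD = 15

def report_deadline(detected_on: date, severity: Severity) -> date:
    """Computed deadline for the initial report (calendar days)."""
    return detected_on + timedelta(days=severity.value)

def is_overdue(detected_on: date, severity: Severity, today: date) -> bool:
    """Drives the overdue alerts in the notification pipeline."""
    return today > report_deadline(detected_on, severity)
```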

AI Bill of Materials (SPDX 3.0)

Transparency obligations require documented knowledge of every AI system component. OmegaOS™ generates a structured AI Bill of Materials compliant with the SPDX 3.0 AI Profile standard — per decision and per AI system.

Each BOM links the decision to the complete component chain: models, datasets, training methods, evidence sources, and risk assessments. Integrity is verified via cryptographic hash. Export formats: SPDX 3.0 JSON-LD, JSON compact, Markdown.
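Hash-based BOM integrity checks of this kind typically require a canonical serialization, so the digest does not depend on field ordering. A minimal sketch, assuming key-sorted JSON as the canonical form; the actual OmegaOS™ canonicalization is not specified here:

```python
import hashlib
import json

def bom_hash(bom: dict) -> str:
    """Digest over a canonical serialization: sorting keys makes the
    hash independent of dict insertion order (illustrative convention)."""
    canonical = json.dumps(bom, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_bom(bom: dict, expected: str) -> bool:
    """Integrity check: any change to any component alters the digest."""
    return bom_hash(bom) == expected
```

The same digest can accompany each export format (SPDX 3.0 JSON-LD, JSON compact, Markdown) so a recipient can detect tampering regardless of which rendering they received.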

Disclaimer

This page is provided for informational purposes only. It does not constitute legal advice and does not imply certification, approval, or formal compliance.

Obligations depend on role (provider, deployer, importer), use case, and jurisdiction.
Consult qualified legal counsel.

Official Sources

Discuss your requirements

Contact the engineering team to discuss regulatory alignment for your operational context.
