AI in control: rethinking cybersecurity compliance and auditing

Loumachi, Fatma Yasmine, Lacerda, Marcio J., Ouazzane, Karim, Adnane, Asma and Adamyk, Oksana (2026) AI in control: rethinking cybersecurity compliance and auditing. Information and Software Technology. ISSN 1873-6025 (In Press)

Abstract

Context
Placing Artificial Intelligence (AI) in control of cybersecurity compliance and auditing shifts its role from decision-support to direct execution of regulatory operational processes, where AI outputs may constitute compliance artefacts and audit evidence. This raises the problem of Meta-Compliance, in which not only the organisation but also the AI system must satisfy enforceable requirements. Yet existing frameworks provide no operational criteria for recognising AI as authoritative in such roles. Trustworthy AI principles define high-level Second-Layer requirements but remain non-binding, whereas First-Layer organisational requirements impose explicit justificatory and evidentiary duties.
Objectives
This study investigates the minimal normative conditions under which AI systems can be recognised as authoritative in compliance and auditing, capable of producing evidence valid for assurance.
Methods
Doctrinal analysis is conducted on binding “shall/must” provisions across PCI DSS, DORA, UK GDPR, NIS2, ISO/IEC 27001, and NIST SP 800-53. Provisions are normalised through the compliance–audit chain (requirement → control → rule → evidence) and mapped against Second-Layer AI governance requirements. The result is the Compliance–Audit Authority Benchmark (CAAB), comprising six criteria: Traceability, Explainability, Evidence Integrity, Adaptability, Action Governance, and Reasoning.
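The normalisation described above can be illustrated with a minimal sketch. This is not an artefact from the paper: the CAAB is a doctrinal benchmark, and the class and field names below are assumptions introduced purely to show how a provision might be modelled along the requirement → control → rule → evidence chain and assessed against the six criteria.

```python
from dataclasses import dataclass, field

# The six CAAB criteria named in the paper.
CAAB_CRITERIA = frozenset({
    "Traceability", "Explainability", "Evidence Integrity",
    "Adaptability", "Action Governance", "Reasoning",
})

@dataclass
class Provision:
    """A binding 'shall/must' provision normalised through the
    compliance-audit chain: requirement -> control -> rule -> evidence."""
    framework: str    # e.g. "PCI DSS", "DORA", "NIS2"
    requirement: str  # the binding obligation as stated
    control: str      # the organisational control implementing it
    rule: str         # the machine-checkable rule derived from the control
    evidence: str     # the artefact that demonstrates satisfaction

@dataclass
class ModelAssessment:
    """Which CAAB criteria a model family satisfies intrinsically
    (hypothetical assignments, for illustration only)."""
    family: str
    satisfied: frozenset = field(default_factory=frozenset)

    def is_authoritative(self) -> bool:
        # Authority requires every criterion to be met.
        return CAAB_CRITERIA <= self.satisfied

symbolic = ModelAssessment("symbolic/knowledge-representation", CAAB_CRITERIA)
generative = ModelAssessment("deep/generative", frozenset({"Adaptability"}))

print(symbolic.is_authoritative())    # True
print(generative.is_authoritative())  # False
```

The subset check mirrors the paper's framing that authority is an all-or-nothing evidentiary threshold rather than a score: missing any one criterion (e.g. Evidence Integrity) disqualifies the system unless an external governance mechanism supplies it.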
Results
Applying CAAB across AI model families and architectures shows that symbolic and knowledge-representation methods satisfy most criteria intrinsically, whilst neural, deep, and generative models do not unless supported by external governance mechanisms. This exposes a structural gap between First-Layer organisational requirements and Second-Layer AI requirements, clarifying that authority rests on evidentiary guarantees rather than statistical accuracy.
Conclusion
The study formalises Meta-Compliance as the recursive structure in which both organisations and AI systems become subjects of assurance. CAAB defines the minimum conditions for recognising AI as authoritative, whilst the proposed Verifiable Reasoning Architecture (VRA) may offer a pathway toward AI systems anchored in secured evidence, reproducible inference, and symbolic governance, establishing audit-ready authority in high-risk contexts.

Documents
AIinControl.pdf - Accepted Version
Restricted to Repository staff only until 18 March 2028.
Available under License Creative Commons Attribution Non-commercial No Derivatives 4.0.
