Robustness and Cybersecurity in the EU Artificial Intelligence Act

📅 2025-02-22
📈 Citations: 1
✨ Influential: 1
🤖 AI Summary
The EU AI Act (AIA) exhibits structural deficiencies—particularly ambiguous legal definitions and insufficient technical specifications—in its robustness and cybersecurity requirements for high-risk AI systems (Art. 15) and general-purpose AI models (Art. 55). This paper is the first to systematically identify and analyze the law–technology gap embedded in these provisions. Leveraging interdisciplinary analysis—including statutory interpretation, machine learning robustness theory (e.g., adversarial robustness, out-of-distribution generalization), and cybersecurity practice—we assess the operational feasibility of the requirements. Our contribution is a cross-disciplinary compliance framework that delivers actionable recommendations for the European Commission’s guidance documents, harmonized standard development, and the benchmarking methodology stipulated under AIA Art. 15(2). By aligning legal terminology with empirically grounded ML security research, the framework advances precise, implementation-ready resilience governance—thereby addressing a critical gap in the AIA’s regulatory architecture.

📝 Abstract
The EU Artificial Intelligence Act (AIA) establishes different legal principles for different types of AI systems. While prior work has sought to clarify some of these principles, little attention has been paid to robustness and cybersecurity. This paper aims to fill this gap. We identify legal challenges and shortcomings in provisions related to robustness and cybersecurity for high-risk AI systems (Art. 15 AIA) and general-purpose AI models (Art. 55 AIA). We show that robustness and cybersecurity demand resilience against performance disruptions. Furthermore, we assess potential challenges in implementing these provisions in light of recent advancements in the machine learning (ML) literature. Our analysis informs efforts to develop harmonized standards, guidelines by the European Commission, as well as benchmarks and measurement methodologies under Art. 15(2) AIA. With this, we seek to bridge the gap between legal terminology and ML research, fostering a better alignment between research and implementation efforts.
Problem

Research questions and friction points this paper is trying to address.

Clarifies the underexplored robustness and cybersecurity provisions of the EU AI Act (Art. 15 and Art. 55 AIA)
Identifies legal challenges and shortcomings in the requirements for high-risk AI systems and general-purpose AI models
Bridges the gap between legal terminology and machine learning research on robustness and security
Innovation

Methods, ideas, or system contributions that make the work stand out.

First systematic analysis of the law–technology gap in the AIA's robustness and cybersecurity provisions
Interdisciplinary method combining statutory interpretation with ML robustness theory and cybersecurity practice
Actionable recommendations for harmonized standards, Commission guidance, and the benchmarking methodology under Art. 15(2) AIA