AI Agents Under EU Law

📅 2026-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the compliance challenges faced by AI agents operating within the European Union's complex, multi-regulatory environment, particularly those arising from behavioral drift and insufficient transparency across multi-agent linkages. It presents the first systematic integration of key regulatory and policy instruments, including the Artificial Intelligence Act, the Cyber Resilience Act (CRA), Standardisation Request M/613, and the GPAI Code of Practice, into a unified compliance framework tailored for AI agents. Employing regulatory mapping, a behavioral taxonomy, and data flow tracing, the work establishes correspondences between nine deployment scenarios and the relevant legal triggers, and proposes a twelve-step implementation pathway. The research underscores that high-risk AI agents exhibiting untraceable behavioral drift cannot satisfy the core requirements of the AI Act, requiring providers to comprehensively audit their agents' external behaviors, data flows, interconnected systems, and affected entities.
📝 Abstract
AI agents, i.e. AI systems that autonomously plan, invoke external tools, and execute multi-step action chains with reduced human involvement, are being deployed at scale across enterprise functions ranging from customer service and recruitment to clinical decision support and critical infrastructure management. The EU AI Act (Regulation 2024/1689) regulates these systems through a risk-based framework, but it does not operate in isolation: providers face simultaneous obligations under the GDPR, the Cyber Resilience Act, the Digital Services Act, the Data Act, the Data Governance Act, sector-specific legislation, the NIS2 Directive, and the revised Product Liability Directive. This paper provides the first systematic regulatory mapping for AI agent providers, integrating (a) the draft harmonised standards under Standardisation Request M/613 to CEN/CENELEC JTC 21 as of January 2026, (b) the GPAI Code of Practice published in July 2025, (c) the CRA harmonised standards programme under Mandate M/606, accepted in April 2025, and (d) the Digital Omnibus proposals of November 2025. We present a practical taxonomy of nine agent deployment categories that maps concrete actions to regulatory triggers, and we identify agent-specific compliance challenges in cybersecurity, human oversight, transparency across multi-party action chains, and runtime behavioral drift. We propose a twelve-step compliance architecture and a regulatory trigger mapping connecting agent actions to applicable legislation. We conclude that high-risk agentic systems with untraceable behavioral drift cannot currently satisfy the AI Act's essential requirements, and that the provider's foundational compliance task is an exhaustive inventory of the agent's external actions, data flows, connected systems, and affected persons.
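The "regulatory trigger mapping" the abstract describes can be pictured as a lookup from an agent's concrete actions to the EU instruments each action engages. The sketch below is illustrative only, not the paper's actual mapping: the action attributes and trigger rules are hypothetical simplifications, while the instrument names (AI Act, GDPR, Cyber Resilience Act, NIS2 Directive, Product Liability Directive) come from the abstract.

```python
# Hypothetical sketch of a regulatory trigger mapping for AI agent actions.
# Attributes and rules are invented for illustration; instrument names are
# those listed in the paper's abstract.
from dataclasses import dataclass


@dataclass
class AgentAction:
    name: str
    processes_personal_data: bool = False
    invokes_external_tool: bool = False
    is_high_risk_domain: bool = False  # e.g. recruitment, clinical decision support


def applicable_instruments(action: AgentAction) -> set:
    """Return the EU instruments plausibly triggered by one agent action."""
    triggered = {"AI Act"}  # the risk-based baseline applies to any AI system
    if action.processes_personal_data:
        triggered.add("GDPR")
    if action.invokes_external_tool:
        # external tool calls pull in cybersecurity-related obligations
        triggered.update({"Cyber Resilience Act", "NIS2 Directive"})
    if action.is_high_risk_domain:
        triggered.add("Product Liability Directive")
    return triggered


# Example: a recruitment-screening agent calling an external CV parser
screen_cv = AgentAction(
    "screen_cv",
    processes_personal_data=True,
    invokes_external_tool=True,
    is_high_risk_domain=True,
)
print(sorted(applicable_instruments(screen_cv)))
```

Enumerating every action this way is one possible reading of the paper's recommended first step: an exhaustive inventory of the agent's external actions, data flows, connected systems, and affected persons.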
Problem

Research questions and friction points this paper is trying to address.

AI agents
EU AI Act
regulatory compliance
behavioral drift
multi-party action chains
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI Agents
Regulatory Compliance
Behavioral Drift
Risk-Based Framework
Harmonised Standards
👥 Authors

Luca Nannini (Piccadilly Labs, Association of AI Ethicists, Centro Singular de Investigación en Tecnoloxías Intelixentes da USC)
Adam Leon Smith (Piccadilly Labs, AIQI Consortium)
Michele Joshua Maggini (Centro Singular de Investigación en Tecnoloxías Intelixentes da USC)
Enrico Panai (Association of AI Ethicists, BeEthical)
Sandra Feliciano (INSIGHT – Piaget Research Center for Ecological Human Development)
Aleksandr Tiulkanov (Responsible Innovations, ForHumanity Europe)
Elena Maran (Alethesis AI)
James Gealy (SaferAI)
Piercosma Bisconti (Assistant Professor, Sapienza University of Rome & DEXAI - Artificial Ethics)