Assessing High-Risk Systems: An EU AI Act Verification Framework

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
The EU AI Act faces challenges including the absence of systematic methodologies for legal compliance verification, heterogeneous national preparedness, and ambiguous regulatory interpretations. To address these, this study proposes the first comprehensive compliance verification framework tailored to high-risk AI systems. Structured along two dimensions, method type (governance vs. testing) and assessment object (data, model, process, product), the framework establishes a multi-layered, lifecycle-spanning verification paradigm. It introduces a novel mapping mechanism that systematically translates legal provisions into executable verification activities, integrating compliance engineering, law-technology alignment modeling, standards-mapping matrices, and risk-informed pathway design. The framework significantly reduces regulatory uncertainty, enhances cross-border assessment consistency, and enables coordinated governance among policymakers, auditors, and developers.

📝 Abstract
A central challenge in implementing the AI Act and other AI-relevant regulations in the EU is the lack of a systematic approach to verify their legal mandates. Recent surveys show that this regulatory ambiguity is perceived as a significant burden, leading to inconsistent readiness across Member States. This paper proposes a comprehensive framework designed to help close this gap by organising compliance verification along two fundamental dimensions: the type of method (controls vs. testing) and the target of assessment (data, model, processes, and final product). Additionally, our framework maps core legal requirements to concrete verification activities, serving as a vital bridge between policymakers and practitioners, and aligning legal text with technical standards and best practices. The proposed approach aims to reduce interpretive uncertainty, promote consistency in assessment practices, and support the alignment of regulatory, ethical, and technical perspectives across the AI lifecycle.
Problem

Research questions and friction points this paper is trying to address.

No systematic approach exists to verify the EU AI Act's legal mandates.
Regulatory ambiguity causes inconsistent readiness across Member States.
A bridge is needed between legal requirements and technical verification activities.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Organizes compliance verification along two dimensions: method type (governance vs. testing) and assessment object (data, model, process, product).
Maps core legal requirements to concrete verification activities.
Aligns legal text with technical standards and best practices.
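The two-dimensional structure described above can be sketched as a grid keyed by (method type, assessment object), with each cell holding the verification activities mapped from legal provisions. This is a minimal illustration of the idea, not the paper's actual implementation; the provision labels and activity descriptions are placeholders.

```python
from dataclasses import dataclass

# The two axes named in the paper.
METHOD_TYPES = ("governance", "testing")
ASSESSMENT_OBJECTS = ("data", "model", "process", "product")

@dataclass
class VerificationActivity:
    legal_provision: str      # illustrative label, not a quote from the Act
    method_type: str          # "governance" or "testing"
    assessment_object: str    # "data", "model", "process", or "product"
    description: str

    def __post_init__(self):
        # Reject entries that fall outside the framework's two dimensions.
        assert self.method_type in METHOD_TYPES
        assert self.assessment_object in ASSESSMENT_OBJECTS

def build_matrix(activities):
    """Group activities into the (method type x assessment object) grid."""
    matrix = {(m, o): [] for m in METHOD_TYPES for o in ASSESSMENT_OBJECTS}
    for a in activities:
        matrix[(a.method_type, a.assessment_object)].append(a)
    return matrix

# Hypothetical example entries mapping provisions to activities.
activities = [
    VerificationActivity("Art. 10 (data governance)", "testing", "data",
                         "Test training data for representativeness and bias"),
    VerificationActivity("Art. 9 (risk management)", "governance", "process",
                         "Audit the documented risk-management process"),
]
matrix = build_matrix(activities)
for cell, acts in matrix.items():
    if acts:
        print(cell, "->", [a.description for a in acts])
```

A real mapping would cover every cell of the grid across the AI lifecycle; the point of the structure is that an auditor can query one cell (say, testing activities targeting the model) without re-interpreting the legal text each time.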
Alessio Buscemi
Luxembourg Institute of Science and Technology
Large Language Models · AI · Machine Learning · Automotive networks
Tom Deckenbrunnen
LIST, University of Luxembourg, Luxembourg
Fahria Kabir
Research Institutes of Sweden (RISE), Sweden
Nishat Mowla
Research Institutes of Sweden (RISE), Sweden
Kateryna Mishchenko
Research Institutes of Sweden (RISE), Sweden