Foundations for Risk Assessment of AI in Protecting Fundamental Rights

📅 2025-07-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of assessing AI systems’ risks to fundamental rights under the EU AI Act. Methodologically, it proposes a qualitative risk assessment framework integrating the principle of proportionality with defeasible reasoning. The framework employs a layered conceptual model that captures multi-tiered impacts of AI applications on fundamental rights through contextualized rights weighting, dynamic balancing analysis, and mapping to legal norms. Its key contribution is the first application of defeasible reasoning to AI compliance evaluation, enabling formal, logic-based modeling of legal dynamism, rights conflicts, and proportionality review. Designed for high-risk and general-purpose AI systems, the framework provides an actionable foundation for risk assessment, facilitating subsequent formal modeling and algorithmic implementation. It bridges the gap between theoretical research and regulatory practice, thereby advancing a rights-centered, responsible AI governance ecosystem.

📝 Abstract
This chapter introduces a conceptual framework for qualitative risk assessment of AI, particularly in the context of the EU AI Act. The framework addresses the complexities of legal compliance and fundamental rights protection by integrating definitional balancing and defeasible reasoning. Definitional balancing employs proportionality analysis to resolve conflicts between competing rights, while defeasible reasoning accommodates the dynamic nature of legal decision-making. Our approach stresses the need to analyze AI deployment scenarios and to identify potential legal violations and multi-layered impacts on fundamental rights. On the basis of this analysis, we provide philosophical foundations for a logical account of AI risk analysis. In particular, we consider the basic building blocks for conceptually grasping the interaction between AI deployment scenarios and fundamental rights, incorporating definitional balancing and arguments about the contextual promotion or demotion of rights into defeasible reasoning. This layered approach allows for more operative models of assessment of both high-risk AI systems and General Purpose AI (GPAI) systems, emphasizing the broader applicability of the latter. Future work aims to develop a formal model and effective algorithms to enhance AI risk assessment, bridging theoretical insights with practical applications to support responsible AI governance.
Problem

Research questions and friction points this paper is trying to address.

Develops a framework for AI risk assessment under the EU AI Act
Addresses the complexities of legal compliance and fundamental rights protection
Proposes methods for dynamically analyzing AI impacts on rights
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates definitional balancing and defeasible reasoning
Employs proportionality analysis to resolve rights conflicts
Offers a layered approach covering both high-risk and GPAI systems
Antonino Rotolo
Department of Legal Studies and Alma AI, University of Bologna, Bologna, Italy
Beatrice Ferrigno
Department of Legal Studies and Alma AI, University of Bologna, Bologna, Italy
Jose Miguel Angel Garcia Godinez
Department of Legal Studies and Alma AI, University of Bologna, Bologna, Italy
Claudio Novelli
Yale University
Legal Philosophy, Political Philosophy, Philosophy of Technology, Ethics
Giovanni Sartor
Università di Bologna, European University Institute
Law, Legal Theory, Artificial Intelligence