🤖 AI Summary
This study addresses the challenge of assessing AI systems’ risks to fundamental rights under the EU AI Act. Methodologically, it proposes a qualitative risk assessment framework integrating the principle of proportionality with defeasible reasoning. The framework employs a layered conceptual model that captures multi-tiered impacts of AI applications on fundamental rights through contextualized rights weighting, dynamic balancing analysis, and mapping to legal norms. Its key contribution is the first application of defeasible reasoning to AI compliance evaluation, enabling formal, logic-based modeling of legal dynamism, rights conflicts, and proportionality review. Designed for high-risk and general-purpose AI systems, the framework provides an actionable foundation for risk assessment, facilitating subsequent formal modeling and algorithmic implementation. It bridges the gap between theoretical research and regulatory practice, thereby advancing a rights-centered, responsible AI governance ecosystem.
📝 Abstract
This chapter introduces a conceptual framework for qualitative risk assessment of AI, particularly in the context of the EU AI Act. The framework addresses the complexities of legal compliance and fundamental rights protection by integrating definitional balancing and defeasible reasoning. Definitional balancing employs proportionality analysis to resolve conflicts between competing rights, while defeasible reasoning accommodates the dynamic nature of legal decision-making. Our approach stresses the need to analyze AI deployment scenarios and to identify potential legal violations and multi-layered impacts on fundamental rights. On the basis of this analysis, we provide philosophical foundations for a logical account of AI risk analysis. In particular, we consider the basic building blocks for conceptually grasping the interaction between AI deployment scenarios and fundamental rights, incorporating definitional balancing and arguments about the contextual promotion or demotion of rights into defeasible reasoning. This layered approach allows for more operative assessment models for both high-risk AI systems and General Purpose AI (GPAI) systems, emphasizing the broader applicability of the latter. Future work aims to develop a formal model and effective algorithms to enhance AI risk assessment, bridging theoretical insights with practical applications to support responsible AI governance.
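The combination of defeasible reasoning with proportionality-based balancing can be illustrated with a minimal sketch. The code below is a hypothetical toy model, not the chapter's formal account: arguments invoking competing rights carry contextual weights (standing in for a proportionality analysis), and a conclusion is drawn defeasibly, remaining open to defeat if the context reweights the rights involved. All class and function names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    claim: str     # e.g. "deployment permitted"
    right: str     # the fundamental right the argument invokes
    weight: float  # contextual weight assigned by proportionality analysis

def resolve(pro: Argument, con: Argument) -> str:
    """Defeasibly prefer the argument whose invoked right carries
    greater contextual weight; a tie leaves the conflict unresolved."""
    if pro.weight > con.weight:
        return pro.claim
    if con.weight > pro.weight:
        return con.claim
    return "undecided"

# Toy scenario: freedom to conduct a business vs. data protection,
# in a deployment context where the privacy impact weighs more heavily.
pro = Argument("deployment permitted", "freedom to conduct a business", 0.4)
con = Argument("deployment prohibited", "protection of personal data", 0.7)
print(resolve(pro, con))  # deployment prohibited
```

The point of the sketch is that the outcome is contextual, not absolute: the same pair of rights can yield the opposite conclusion under a different weighting, which is what a defeasible (rather than strict) inference captures.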