Unravelling Responsibility for AI

📅 2023-08-04
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Ambiguity in attributing responsibility for AI system failures impedes accountability and redress. Method: The paper develops a conceptual framework for responsibility built on the three-part formulation 'Actor A is responsible for Occurrence O.' It unravels responsibility along three dimensions: who the responsible actor is, the sense in which they are responsible (e.g., causal, role, moral, legal), and the aspect of the occurrence they are responsible for. The framework is accompanied by a graphical notation for visualising responsibility networks and a general methodology for applying the framework to specific scenarios. Contribution/Results: Illustrated through a fictitious fatal collision between a crewless, AI-enabled maritime vessel in autonomous mode and a traditional crewed vessel, the analysis traces the distributed, often overlapping responsibilities of developers, operators, regulators, and other stakeholders, providing a foundation for stakeholders from diverse disciplinary backgrounds to discuss and address complex responsibility questions in hypothesised and real-world cases involving AI.
📝 Abstract
It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems. This is important to achieve justice and compensation for victims of AI harms, and to inform policy and engineering practice. But without a clear, thorough understanding of what 'responsibility' means, deliberations about where responsibility lies will be, at best, unfocused and incomplete and, at worst, misguided. Furthermore, AI-enabled systems exist within a wider ecosystem of actors, decisions, and governance structures, giving rise to complex networks of responsibility relations. To address these issues, this paper presents a conceptual framework of responsibility, accompanied by a graphical notation and general methodology, for visualising these responsibility networks and for tracing different responsibility attributions for AI. Taking the three-part formulation 'Actor A is responsible for Occurrence O,' the framework unravels the concept of responsibility to clarify that there are different possibilities of who is responsible for AI, senses in which they are responsible, and aspects of events they are responsible for. The notation allows these permutations to be represented graphically. The methodology enables users to apply the framework to specific scenarios. The aim is to offer a foundation to support stakeholders from diverse disciplinary backgrounds to discuss and address complex responsibility questions in hypothesised and real-world cases involving AI. The work is illustrated by application to a fictitious scenario of a fatal collision between a crewless, AI-enabled maritime vessel in autonomous mode and a traditional, crewed vessel at sea.
Problem

Research questions and friction points this paper is trying to address.

Clarifying responsibility attribution for AI system outputs and impacts
Visualizing complex responsibility networks in AI ecosystems
Providing methodology to analyze responsibility in real-world AI cases
Innovation

Methods, ideas, or system contributions that make the work stand out.

Conceptual framework for AI responsibility analysis
Graphical notation for responsibility networks visualization
Methodology for applying framework to scenarios
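As a rough illustration of the framework's shape, the three-part formulation 'Actor A is responsible for Occurrence O' can be sketched as a small data structure. This is a minimal sketch under stated assumptions, not the paper's own notation or tooling; all class, field, and example names below are illustrative.

```python
from dataclasses import dataclass

# Sketch of the three-part responsibility formulation: an actor,
# the sense in which they are responsible (e.g. causal, role, moral,
# legal), and the aspect of the occurrence they are responsible for.
# Names here are assumptions for illustration, not the authors' terms.

@dataclass(frozen=True)
class Attribution:
    actor: str        # who is responsible (developer, operator, regulator, ...)
    sense: str        # in what sense (causal, role, moral, legal, ...)
    occurrence: str   # what aspect of the event they are responsible for

class ResponsibilityGraph:
    """Collects attributions so they can be queried per occurrence."""

    def __init__(self):
        self.edges: list[Attribution] = []

    def attribute(self, actor: str, sense: str, occurrence: str) -> None:
        self.edges.append(Attribution(actor, sense, occurrence))

    def for_occurrence(self, occurrence: str) -> list[Attribution]:
        return [e for e in self.edges if e.occurrence == occurrence]

# Hypothetical entries loosely modelled on the vessel-collision scenario:
g = ResponsibilityGraph()
g.attribute("operator", "role", "navigation decision")
g.attribute("developer", "causal", "perception failure")
g.attribute("developer", "moral", "perception failure")
print(len(g.for_occurrence("perception failure")))  # 2
```

A structure like this makes the paper's point concrete: the same occurrence can carry multiple attributions, differing by actor and by sense of responsibility.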
Zoe Porter
University of York
Ethics of AI · Philosophy of AI · Autonomous Systems · Assurance · Responsibility
Joanna Al-Qaddoumi
York Law School, University of York, UK
P. Conmy
Phillip Morgan
York Law School, University of York, UK
J. McDermid
Department of Computer Science, University of York, UK
I. Habli
Department of Computer Science, University of York, UK