🤖 AI Summary
This study addresses the lack of a clear definition and implementation pathway for accountability mechanisms among agents in multi-agent systems. It pioneers the transfer of accountability concepts from human organizational contexts into the internal architecture of multi-agent systems. Through an interdisciplinary literature review, the work distills a coherent conceptualization of accountability and integrates it with formal modeling and real-world application scenarios to demonstrate how accountability processes enhance system transparency and enable responsibility tracing. The research identifies key challenges in designing accountable agents and proposes preliminary solutions alongside a roadmap for future inquiry, thereby establishing a theoretical foundation for enabling autonomous agents to participate meaningfully in accountability within open socio-technical systems.
📝 Abstract
AI systems are becoming increasingly complex, ubiquitous and autonomous, raising growing concerns about their impacts on individuals and society. In response, researchers have begun investigating how to ensure that the methods underlying AI decision-making are transparent and their decisions are explainable to people and conformant to human values and ethical principles. As part of this research thrust, the need for accountability within AI systems has been noted, but this notion has proven elusive to define; we aim to address this issue in the current paper. Unlike much recent work, we do not address accountability within the human organisational processes of developing and deploying AI; rather, we consider what it would mean for the agents within a multi-agent system (MAS), potentially including human agents, to be accountable to other agents or to have others accountable to them.
In this work, we make the following contributions: we provide an in-depth survey of existing work on accountability in multiple disciplines, seeking to identify a coherent definition of the concept; we give a realistic example of a multi-agent system application domain that illustrates the benefits of enabling agents to follow accountability processes; and we identify a set of research challenges for the MAS community in building accountable agents, sketching out some initial solutions and thereby laying out a road-map for future research. Our focus is on laying the groundwork to enable autonomous elements within open socio-technical systems to take part in accountability processes.