Compositional Shielding and Reinforcement Learning for Multi-Agent Systems

📅 2024-10-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multi-agent systems, existing shielding mechanisms under global safety constraints suffer from exponential computational complexity growth with respect to the number of agents. Method: This paper proposes the first assume-guarantee reasoning–based compositional shielding framework, which soundly decomposes global safety specifications into agent-local shielding obligations. It integrates compositional shielding design, local state abstraction, and cooperative verification, and incorporates safety-guided deep reinforcement learning. Contribution/Results: The framework ensures provable safety and scalable computation. Evaluated on two representative case studies, it reduces shielding computation time from hours to seconds, accelerates policy convergence by 3.2×, and significantly improves the quality of safe policies within fixed training budgets. For the first time, it unifies formal provability and computational scalability in multi-agent safe reinforcement learning—both theoretically and empirically.

Technology Category

Application Category

📝 Abstract
Deep reinforcement learning has emerged as a powerful tool for obtaining high-performance policies. However, the safety of these policies has been a long-standing issue. One promising paradigm to guarantee safety is a shield, which shields a policy from making unsafe actions. However, computing a shield scales exponentially in the number of state variables. This is a particular concern in multi-agent systems with many agents. In this work, we propose a novel approach for multi-agent shielding. We address scalability by computing individual shields for each agent. The challenge is that typical safety specifications are global properties, but the shields of individual agents only ensure local properties. Our key to overcome this challenge is to apply assume-guarantee reasoning. Specifically, we present a sound proof rule that decomposes a (global, complex) safety specification into (local, simple) obligations for the shields of the individual agents. Moreover, we show that applying the shields during reinforcement learning significantly improves the quality of the policies obtained for a given training budget. We demonstrate the effectiveness and scalability of our multi-agent shielding framework in two case studies, reducing the computation time from hours to seconds and achieving fast learning convergence.
Problem

Research questions and friction points this paper is trying to address.

Ensuring safety in multi-agent reinforcement learning systems
Overcoming scalability issues in multi-agent shielding methods
Decomposing global safety specs into local agent obligations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Individual shields per agent for scalability
Assume-guarantee reasoning for global safety
Shielded reinforcement learning improves policy quality
🔎 Similar Papers
No similar papers found.
A
Asger Horn Brorholt
Aalborg University
K
K. G. Larsen
Aalborg University
Christian Schilling
Christian Schilling
Associate Professor at Aalborg University