Combining Planning and Reinforcement Learning for Solving Relational Multiagent Domains

📅 2025-02-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address core challenges in relational multi-agent reinforcement learning (MARL)—including state-action space explosion, low sample efficiency due to environmental non-stationarity, and poor cross-task generalization—this paper proposes a novel MARL framework integrating relational planning with state abstraction. Methodologically, it introduces a centralized relational planner operating under the centralized training with decentralized execution (CTDE) paradigm, enabling explicit modeling and efficient abstraction of multi-agent interaction structures. Crucially, the planner requires no task-specific priors, thereby substantially reducing the policy search space and enhancing training stability. Empirically, the approach achieves significant improvements in sample efficiency across multiple relational MARL benchmarks. Moreover, it is the first to demonstrate zero-shot cross-task transfer and robust policy generalization—key milestones for scalable and adaptive multi-agent learning.

Technology Category

Application Category

📝 Abstract
Multiagent Reinforcement Learning (MARL) poses significant challenges due to the exponential growth of state and action spaces and the non-stationary nature of multiagent environments. This results in notable sample inefficiency and hinders generalization across diverse tasks. The complexity is further pronounced in relational settings, where domain knowledge is crucial but often underutilized by existing MARL algorithms. To overcome these hurdles, we propose integrating relational planners as centralized controllers with efficient state abstractions and reinforcement learning. This approach proves to be sample-efficient and facilitates effective task transfer and generalization.
Problem

Research questions and friction points this paper is trying to address.

Addresses MARL's sample inefficiency
Integrates relational planners with RL
Enhances task transfer and generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates relational planners with MARL
Uses efficient state abstractions
Enhances task transfer and generalization
🔎 Similar Papers
No similar papers found.