Scalable Multiagent Reinforcement Learning with Collective Influence Estimation

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of scaling multiagent reinforcement learning to real-world robotic systems under communication constraints. Existing approaches often rely on explicit exchange of actions or states, or require estimation networks that grow with the number of agents, hindering scalability. To overcome these limitations, the authors propose the Collective Influence Estimation Network (CIEN) framework, which enables each agent to collaborate effectively using only local observations and the state of the shared task object, without explicit inter-agent communication. By modeling the collective influence of other agents on the task object, CIEN keeps network complexity independent of team size and allows new agents to join seamlessly. Integrated with the Soft Actor-Critic (SAC) algorithm and a local observation-driven mechanism, the method achieves stable coordination under limited communication, as demonstrated by real-robot experiments showing strong robustness, minimal communication dependency, and favorable deployment performance.

📝 Abstract
Multiagent reinforcement learning (MARL) has attracted considerable attention due to its potential in addressing complex cooperative tasks. However, existing MARL approaches often rely on frequent exchanges of action or state information among agents to achieve effective coordination, which is difficult to satisfy in practical robotic systems. A common solution is to introduce estimator networks to model the behaviors of other agents and predict their actions; nevertheless, such designs cause the size and computational cost of the estimator networks to grow rapidly with the number of agents, thereby limiting scalability in large-scale systems. To address these challenges, this paper proposes a multiagent learning framework augmented with a Collective Influence Estimation Network (CIEN). By explicitly modeling the collective influence of other agents on the task object, each agent can infer critical interaction information solely from its local observations and the task object's states, enabling efficient collaboration without explicit action information exchange. The proposed framework effectively avoids network expansion as the team size increases; moreover, new agents can be incorporated without modifying the network structures of existing agents, demonstrating strong scalability. Experimental results on multiagent cooperative tasks based on the Soft Actor-Critic (SAC) algorithm show that the proposed method achieves stable and efficient coordination under communication-limited environments. Furthermore, policies trained with collective influence modeling are deployed on a real robotic platform, where experimental results indicate significantly improved robustness and deployment feasibility, along with reduced dependence on communication infrastructure.
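The abstract's core claim is architectural: because each agent estimates only the *aggregate* influence of its teammates on the shared task object, the estimator's input is a fixed-size pair (local observation, object state), so it does not grow as agents are added. A minimal sketch of that property, in plain NumPy (all dimensions, names, and the simple MLP forward pass are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

class CollectiveInfluenceEstimator:
    """Illustrative CIEN-style estimator sketch.

    Key property from the paper: the network consumes only an agent's local
    observation and the shared task object's state, so its input size is
    fixed regardless of how many teammates are present.
    """

    def __init__(self, obs_dim, obj_dim, hidden_dim=32, influence_dim=3, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = obs_dim + obj_dim  # fixed: independent of team size
        self.W1 = rng.standard_normal((in_dim, hidden_dim)) * 0.1
        self.b1 = np.zeros(hidden_dim)
        self.W2 = rng.standard_normal((hidden_dim, influence_dim)) * 0.1
        self.b2 = np.zeros(influence_dim)

    def estimate(self, local_obs, obj_state):
        """Predict the aggregate effect of all other agents on the task
        object (e.g., a net force/torque) from local information only."""
        x = np.concatenate([local_obs, obj_state])
        h = np.tanh(x @ self.W1 + self.b1)
        return h @ self.W2 + self.b2
```

The estimate could then be appended to each agent's SAC policy input in place of explicitly communicated teammate actions; adding a new agent changes nothing in this network's structure, which is the scalability argument the abstract makes.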
Problem

Research questions and friction points this paper is trying to address.

Multiagent Reinforcement Learning
Scalability
Communication Constraints
Estimator Networks
Collective Influence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Collective Influence Estimation
Scalable Multiagent Reinforcement Learning
Communication-Efficient Coordination
Local Observation-Based Inference
Robotic Deployment
Zhenglong Luo
School of Engineering, The University of Newcastle, Callaghan, NSW 2308, Australia
Zhiyong Chen
Shanghai Jiao Tong University
6G Networks
Wireless Communications
Computing and Caching Networks
Aoxiang Liu
School of Automation, Central South University, Changsha 410083, China
Ke Pan
School of Automation, Central South University, Changsha 410083, China