Multi-Agent LLM Actor-Critic Framework for Social Robot Navigation

📅 2025-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Zero-shot adaptation for multi-robot socially aware navigation (SAN) in human-dense environments remains challenging due to bottlenecks in existing centralized large language model (LLM) frameworks: single-point decision dependency, missing verification mechanisms, and inconsistent mapping between macro-level intentions and micro-level actions. Method: We propose the first LLM-driven decentralized actor-critic architecture: each robot employs a personality-conditioned LLM actor to generate low-level control commands, while a two-tier critic, comprising global and individual modules, jointly validates actions; an entropy-weighted score fusion mechanism enables self-verification and dynamic re-query. Contribution/Results: The approach decouples high-level intent from low-level control, balancing individual autonomy with collective social compliance. Experiments across diverse multi-robot scenarios show significant improvements in social acceptability, robustness, and coordination over state-of-the-art centralized LLM-based navigation methods.

📝 Abstract
Recent advances in robotics and large language models (LLMs) have sparked growing interest in human-robot collaboration and embodied intelligence. To enable the broader deployment of robots in human-populated environments, socially-aware robot navigation (SAN) has become a key research area. While deep reinforcement learning approaches that integrate human-robot interaction (HRI) with path planning have demonstrated strong benchmark performance, they often struggle to adapt to new scenarios and environments. LLMs offer a promising avenue for zero-shot navigation through commonsense inference. However, most existing LLM-based frameworks rely on centralized decision-making, lack robust verification mechanisms, and face inconsistencies in translating macro-actions into precise low-level control signals. To address these challenges, we propose SAMALM, a decentralized multi-agent LLM actor-critic framework for multi-robot social navigation. In this framework, a set of parallel LLM actors, each reflecting distinct robot personalities or configurations, directly generate control signals. These actions undergo a two-tier verification process via a global critic that evaluates group-level behaviors and individual critics that assess each robot's context. An entropy-based score fusion mechanism further enhances self-verification and re-query, improving both robustness and coordination. Experimental results confirm that SAMALM effectively balances local autonomy with global oversight, yielding socially compliant behaviors and strong adaptability across diverse multi-robot scenarios. More details and videos about this work are available at: https://sites.google.com/view/SAMALM.
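The decentralized propose-verify-re-query loop described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the `propose` and `score` interfaces stand in for LLM calls, and all names here (`navigate_step`, `global_critic`, `individual_critics`) are hypothetical placeholders.

```python
MAX_RETRIES = 3

def navigate_step(actors, global_critic, individual_critics, observations):
    """One decision step of a decentralized LLM actor-critic loop.

    Each actor proposes a control action for its robot; a global critic
    checks group-level behavior and per-robot critics check each action
    in its local context. Rejected actors are re-queried up to a cap.
    """
    actions = [actor.propose(obs) for actor, obs in zip(actors, observations)]
    for _ in range(MAX_RETRIES):
        group_ok = global_critic.score(actions)  # group-level verification
        local_ok = [critic.score(act, obs)       # per-robot verification
                    for critic, act, obs in
                    zip(individual_critics, actions, observations)]
        if group_ok and all(local_ok):
            return actions
        # Re-query only the actors whose actions were rejected locally.
        actions = [actor.propose(obs) if not ok else act
                   for actor, obs, act, ok in
                   zip(actors, observations, actions, local_ok)]
    return actions  # fall back to the last proposals after retries
```

The retry cap keeps the loop bounded even when critics never converge, which matters for real-time navigation.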
Problem

Research questions and friction points this paper is trying to address.

Enhance robot navigation in human-populated environments.
Improve adaptability of robots in new scenarios.
Ensure robust and coordinated multi-robot social navigation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized multi-agent LLM actor-critic framework
Two-tier verification process for robust control
Entropy-based score fusion for enhanced coordination
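The entropy-based fusion above could plausibly work by weighting each critic's score by its confidence, i.e. the inverse entropy of its rating distribution. The paper does not give the exact formula, so the sketch below is an assumption: `fuse_scores` and `verify` are hypothetical names, and the choice of inverse-entropy weights and a fixed acceptance threshold are illustrative.

```python
import math

def entropy(probs):
    """Shannon entropy (natural log) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def fuse_scores(global_probs, individual_probs):
    """Entropy-weighted fusion of global and individual critic scores.

    Each critic returns a probability distribution over discrete quality
    ratings (e.g. [bad, ok, good]); a lower-entropy (more confident)
    critic receives a larger weight in the fused score.
    """
    eps = 1e-6  # avoids division by zero for a fully confident critic
    w_g = 1.0 / (entropy(global_probs) + eps)
    w_i = 1.0 / (entropy(individual_probs) + eps)
    total = w_g + w_i
    w_g, w_i = w_g / total, w_i / total
    # Expected rating under each distribution (0 = worst rating).
    s_g = sum(k * p for k, p in enumerate(global_probs))
    s_i = sum(k * p for k, p in enumerate(individual_probs))
    return w_g * s_g + w_i * s_i

def verify(global_probs, individual_probs, threshold=1.0):
    """Accept the action if the fused score clears the threshold;
    otherwise the actor would be asked to re-query."""
    return fuse_scores(global_probs, individual_probs) >= threshold
```

Under this weighting, a critic that is certain of its rating dominates the fused score, while a critic that spreads probability evenly contributes little.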