Collision- and Reachability-Aware Multi-Robot Control with Grounded LLM Planners

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multi-robot collaborative control, large language models (LLMs) often generate invalid action plans that violate collision-avoidance and reachability constraints. To address this, the authors apply reinforcement learning with verifiable rewards (RLVR), embedding physically grounded, constraint-aware rewards directly into the LLM's reasoning process to enable end-to-end constraint-aware planning. The approach achieves physical grounding with small-scale models (e.g., Qwen2.5-3B-Instruct and Qwen3-4B) without relying on large model scale or external verification modules. Evaluated in the BoxNet and BoxNet3D simulation environments, the grounded small models achieve significantly higher success rates than ungrounded state-of-the-art LLMs (e.g., GPT-4o-mini), demonstrating superior constraint satisfaction. This work establishes a lightweight, efficient, and formally verifiable paradigm for multi-robot control, marking the first integration of verifiable physical constraints into LLM-based planning.

📝 Abstract
Large language models (LLMs) have demonstrated strong performance in various robot control tasks. However, their deployment in real-world applications remains constrained. Even state-of-the-art LLMs, such as GPT-o4-mini, frequently produce invalid action plans that violate physical constraints, such as directing a robot to an unreachable location or causing collisions between robots. This issue primarily arises from a lack of awareness of these physical constraints during the reasoning process. To address this issue, we propose a novel framework that integrates reinforcement learning with verifiable rewards (RLVR) to instill knowledge of physical constraints into LLMs and induce constraint-aware reasoning during plan generation. In this approach, only valid action plans that successfully complete a control task receive positive rewards. We applied our method to two small-scale LLMs: a non-reasoning Qwen2.5-3B-Instruct and a reasoning Qwen3-4B. The experimental results demonstrate that constraint-aware small LLMs largely outperform large-scale models without constraint grounding on both the BoxNet task and a newly developed BoxNet3D environment built using MuJoCo. This work highlights the effectiveness of grounding even small LLMs with physical constraints to enable scalable and efficient multi-robot control in complex, physically constrained environments.
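The reward scheme described in the abstract is binary and fully checkable: a plan earns a positive reward only if every commanded position is reachable, no robots collide, and the task goal is met. A minimal sketch of such a verifiable reward is shown below; the `Robot` class, `verifiable_reward` function, and the `min_clearance` threshold are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from itertools import combinations
import math


@dataclass
class Robot:
    base: tuple[float, float]  # fixed base position (hypothetical 2D setup)
    reach: float               # maximum arm extension


def euclidean(p: tuple[float, float], q: tuple[float, float]) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])


def verifiable_reward(robots: list[Robot],
                      targets: list[tuple[float, float]],
                      goal_reached: bool,
                      min_clearance: float = 0.5) -> float:
    """Binary reward in the spirit of RLVR: positive only when every
    constraint check passes AND the task goal is met; any violation
    yields zero reward."""
    # Reachability: each commanded target must lie within the arm's radius.
    for robot, target in zip(robots, targets):
        if euclidean(robot.base, target) > robot.reach:
            return 0.0
    # Collision avoidance: enforce pairwise clearance between targets.
    for (_, a), (_, b) in combinations(zip(robots, targets), 2):
        if euclidean(a, b) < min_clearance:
            return 0.0
    return 1.0 if goal_reached else 0.0
```

Because each check is a deterministic geometric predicate, the reward is verifiable without a learned critic, which is what allows it to ground small models during RL fine-tuning.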
Problem

Research questions and friction points this paper is trying to address.

LLMs often violate physical constraints in robot control
Lack of awareness of constraints during reasoning causes invalid plans
Propose RLVR framework to integrate constraint-aware reasoning in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates reinforcement learning with verifiable rewards
Instills physical-constraint awareness in LLMs
Grounds small LLMs for scalable robot control