🤖 AI Summary
In multi-robot collaborative control, large language models (LLMs) often generate invalid action plans that violate collision-avoidance and reachability constraints. To address this, we apply reinforcement learning with verifiable rewards (RLVR), embedding physically grounded, constraint-aware rewards directly into the LLM's training to induce end-to-end constraint-aware planning. This approach physically grounds small-scale models (e.g., Qwen2.5-3B-Instruct and Qwen3-4B) without relying on large parameter counts or external verification modules. Evaluated in the BoxNet and BoxNet3D simulation environments, our grounded small models achieve significantly higher success rates than ungrounded state-of-the-art LLMs (e.g., GPT-4o-mini), demonstrating superior constraint satisfaction. This work establishes a lightweight, efficient, and verifiable paradigm for multi-robot control, marking the first integration of verifiable physical constraints into LLM-based planning.
📝 Abstract
Large language models (LLMs) have demonstrated strong performance in various robot control tasks. However, their deployment in real-world applications remains constrained. Even state-of-the-art LLMs, such as GPT-4o-mini, frequently produce invalid action plans that violate physical constraints, for example directing a robot to an unreachable location or causing collisions between robots. This issue primarily arises from a lack of awareness of these physical constraints during the reasoning process. To address it, we propose a novel framework that integrates reinforcement learning with verifiable rewards (RLVR) to instill knowledge of physical constraints in LLMs, inducing constraint-aware reasoning during plan generation. In this approach, only valid action plans that successfully complete a control task receive positive rewards. We applied our method to two small-scale LLMs: the non-reasoning Qwen2.5-3B-Instruct and the reasoning Qwen3-4B. The experimental results demonstrate that constraint-aware small LLMs substantially outperform ungrounded large-scale models on both the BoxNet task and a newly developed BoxNet3D environment built using MuJoCo. This work highlights the effectiveness of grounding even small LLMs with physical constraints to enable scalable and efficient multi-robot control in complex, physically constrained environments.
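The reward scheme described above, in which only constraint-satisfying plans that complete the task earn a positive reward, can be sketched as a verifiable reward function. The following is a minimal illustration assuming a 2D grid world loosely in the spirit of BoxNet; the `Action` type, the per-robot reachability sets, and the goal representation are all hypothetical simplifications, not the paper's actual interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    robot: str
    target: tuple  # (x, y) grid cell the robot is asked to move to

def verifiable_reward(plan, reachable, goal_cells):
    """Binary RLVR-style reward: 1.0 only if the plan satisfies every
    physical constraint AND completes the task; otherwise 0.0."""
    occupied = set()
    for act in plan:
        # Reachability constraint: a robot may only be sent to cells
        # inside its own reachable workspace.
        if act.target not in reachable[act.robot]:
            return 0.0
        # Collision-avoidance constraint: no two robots may end up
        # in the same cell.
        if act.target in occupied:
            return 0.0
        occupied.add(act.target)
    # Task completion: every goal cell must be covered by some robot.
    return 1.0 if goal_cells <= occupied else 0.0
```

Because the reward is computed by an exact checker rather than a learned critic, any plan that violates a constraint contributes zero reward during training, which is what pushes the policy toward constraint-aware generation.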