🤖 AI Summary
This work addresses long-horizon geometric reasoning for rigid-body navigation through multiple consecutive narrow apertures, where early pose decisions critically influence subsequent reachability. The authors propose a geometry-aligned fine-tuning framework for large language models (LLMs) that generates fixed-length, machine-readable, and geometrically feasible waypoint sequences. The approach combines failure-driven LoRA-based supervised fine-tuning with Group Relative Policy Optimization (GRPO), a reinforcement learning stage whose rewards are grounded in geometric validation. The authors present this as the first framework to incorporate structured failure feedback and explicit geometric feasibility checks into LLM training. Experiments demonstrate state-of-the-art success rates in both in-distribution and out-of-distribution scenarios, with the model actively selecting exit poses that facilitate subsequent passage through constrained openings.
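For context on the RL stage: GRPO replaces a learned critic with group-relative advantages, comparing each sampled sequence's reward against the other samples in its group. A minimal sketch of that standard normalization (the function name is ours, not the paper's):

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages as used in GRPO: center each sampled
    sequence's reward on the group mean and scale by the group std."""
    mu = statistics.mean(rewards)
    sd = statistics.pstdev(rewards) or 1.0  # guard against an all-equal group
    return [(r - mu) / sd for r in rewards]
```

Sequences scored above the group average get positive advantages and are reinforced; below-average ones are suppressed, with no value network to train.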
📝 Abstract
We study rigid-body motion planning through multiple sequential narrow openings, which requires long-horizon geometric reasoning because the configuration used to traverse an early opening constrains the set of reachable configurations for subsequent ones. To address this, we propose a geometry-aligned large language model (LLM) fine-tuning framework that generates fixed-length, machine-readable waypoint sequences that are both geometrically feasible and coordinated across openings. Our approach uses a bi-level training pipeline. First, we perform failure-driven LoRA supervised fine-tuning (SFT) on human demonstrations, incorporating structured failure feedback to teach the model common failure modes and to enforce the output format. Second, we refine the same LoRA adapters using Group Relative Policy Optimization (GRPO) with geometric verification: each sampled waypoint sequence is densified by a model-based planner and scored with a deterministic geometry-derived reward that verifies continuous-motion feasibility. We validate the method in simulation with both quantitative and qualitative results. Our method achieves the highest success rate in both in-distribution and out-of-distribution environments and qualitatively exhibits long-horizon geometric reasoning, selecting exit poses that facilitate entry into subsequent openings.
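To make the densify-and-score idea concrete, here is a toy 2D sketch of what a deterministic geometry-derived reward could look like. Everything here is our own simplification, not the paper's setup: the body is a line segment, the aperture is a slot in the wall x = 0, and linear interpolation stands in for the model-based planner. The reward is the fraction of interpolated poses that clear the aperture, so 1.0 certifies the whole discretized motion.

```python
import math

def segment_feasible(x, y, theta, slot=(-0.4, 0.4), half_len=0.6, eps=1e-9):
    """True if a line-segment body of half-length `half_len` at pose
    (x, y, theta) does not hit the wall x == 0 outside the slot."""
    c, s = math.cos(theta), math.sin(theta)
    if abs(c) < eps:             # body parallel to the wall
        return abs(x) > eps      # collides only if it lies in the wall plane
    t = -x / c                   # body parameter where it crosses x == 0
    if abs(t) > half_len:        # body never reaches the wall plane
        return True
    y_cross = y + t * s
    return slot[0] < y_cross < slot[1]  # crossing must fall inside the opening

def densify(waypoints, steps=20):
    """Linear interpolation between consecutive (x, y, theta) waypoints,
    standing in for the model-based planner."""
    out = []
    for a, b in zip(waypoints, waypoints[1:]):
        for i in range(steps):
            u = i / steps
            out.append(tuple(a[j] + u * (b[j] - a[j]) for j in range(3)))
    out.append(waypoints[-1])
    return out

def geometry_reward(waypoints):
    """Deterministic reward: fraction of densified poses that are feasible."""
    poses = densify(waypoints)
    return sum(segment_feasible(*p) for p in poses) / len(poses)
```

A sequence that slides the body straight through the slot (e.g. from (-1, 0, 0) to (1, 0, 0)) scores 1.0, while one offset above the opening is penalized at every interpolated pose that touches the wall. A reward like this is verifiable and reproducible, which is what makes it usable as an RL signal without a learned reward model.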