NaVILA: Legged Robot Vision-Language-Action Model for Navigation

📅 2024-12-05
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the challenge of mapping natural-language instructions end-to-end to low-level joint actuation for legged-robot vision-and-language navigation. It proposes a two-stage decoupled architecture: an upper stage employs a multimodal large model (ViT-LLM) to generate spatially grounded mid-level actions expressed in language (e.g., “move forward 75 cm”), while a lower stage executes fine-grained motor control via a vision-based reinforcement-learning policy implemented in IsaacLab. The method integrates joint vision-language-action modeling with simulation-to-real co-training, enabling deployment on real quadrupedal robots. Evaluated on both existing benchmarks and a newly constructed high-fidelity IsaacLab simulation benchmark, the approach significantly outperforms state-of-the-art methods. It achieves, for the first time, closed-loop, end-to-end navigation from natural-language instructions to real-world quadrupedal robot execution, establishing a scalable, language-driven motor-control paradigm for embodied intelligence.
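The mid-level language-action interface described above can be illustrated with a small sketch. The parser below is purely hypothetical (the paper does not publish this API); function names, the supported command grammar, and the assumed speed constants are all illustrative stand-ins for how a mid-level command such as "move forward 75 cm" might be converted into a velocity target for the low-level locomotion policy:

```python
import re

# Hypothetical sketch of NaVILA's mid-level action interface.
# The command grammar and speed constants are assumptions, not
# the paper's actual implementation.

FORWARD_SPEED = 0.5  # assumed forward speed, m/s
YAW_RATE = 0.5       # assumed turn rate, rad/s

def parse_midlevel_action(command: str):
    """Convert a mid-level language action into a velocity target.

    Returns (vx [m/s], vyaw [rad/s], duration [s]) for commands like
    "move forward 75 cm" or "turn left 90 degrees"; the tuple would
    then be tracked by the vision-based RL locomotion policy.
    """
    m = re.fullmatch(r"move forward (\d+)\s*cm", command)
    if m:
        dist_m = int(m.group(1)) / 100.0
        return (FORWARD_SPEED, 0.0, dist_m / FORWARD_SPEED)
    m = re.fullmatch(r"turn (left|right) (\d+)\s*degrees", command)
    if m:
        sign = 1.0 if m.group(1) == "left" else -1.0
        angle_rad = int(m.group(2)) * 3.141592653589793 / 180.0
        return (0.0, sign * YAW_RATE, angle_rad / YAW_RATE)
    raise ValueError(f"unrecognized mid-level action: {command!r}")
```

Expressing actions at this level of abstraction is what decouples the two stages: the VLA only has to emit short, spatially grounded phrases, while the locomotion policy handles joint-level control.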

📝 Abstract
This paper proposes to solve the problem of Vision-and-Language Navigation with legged robots, which not only provides a flexible way for humans to command but also allows the robot to navigate through more challenging and cluttered scenes. However, it is non-trivial to translate human language instructions all the way to low-level leg joint actions. We propose NaVILA, a 2-level framework that unifies a Vision-Language-Action model (VLA) with locomotion skills. Instead of directly predicting low-level actions from the VLA, NaVILA first generates mid-level actions with spatial information in the form of language (e.g., "moving forward 75 cm"), which serve as input for a visual locomotion RL policy for execution. NaVILA substantially improves over previous approaches on existing benchmarks. The same advantages are demonstrated in our newly developed benchmarks with IsaacLab, featuring more realistic scenes, low-level controls, and real-world robot experiments. We show more results at https://navila-bot.github.io/
Problem

Research questions and friction points this paper is trying to address.

Vision-and-Language Navigation with legged robots
Translating human language to leg joint actions
Improving navigation in challenging, cluttered scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

2-level Vision-Language-Action framework
Mid-level language-guided actions
Visual locomotion RL policy