FiLM-Nav: Efficient and Generalizable Navigation via VLM Fine-tuning

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of localizing objects described via free-form natural language in embodied navigation. Methodologically, it introduces a novel paradigm that directly fine-tunes pre-trained vision-language models (VLMs) as end-to-end navigation policies—bypassing conventional zero-shot transfer or explicit map construction. The approach employs multi-task mixed-supervised fine-tuning across ObjectNav, OVON, ImageNav, and spatial reasoning tasks, using raw visual trajectories and natural language goals as inputs to enhance generalization to unseen object categories. Its core contribution is the first direct adaptation of VLMs as embodied navigation policies without auxiliary modules or additional annotations. Evaluated on HM3D ObjectNav and HM3D-OVON benchmarks, the method achieves state-of-the-art performance among open-vocabulary approaches, significantly improving success rate and SPL. These results demonstrate the effective transfer of large-scale vision-language knowledge to embodied decision-making.

📝 Abstract
Enabling robotic assistants to navigate complex environments and locate objects described in free-form language is a critical capability for real-world deployment. While foundation models, particularly Vision-Language Models (VLMs), offer powerful semantic understanding, effectively adapting their web-scale knowledge for embodied decision-making remains a key challenge. We present FiLM-Nav (Fine-tuned Language Model for Navigation), an approach that directly fine-tunes a pre-trained VLM as the navigation policy. In contrast to methods that use foundation models primarily in a zero-shot manner or for map annotation, FiLM-Nav learns to select the next best exploration frontier by conditioning directly on raw visual trajectory history and the navigation goal. Leveraging targeted simulated embodied experience allows the VLM to ground its powerful pre-trained representations in the specific dynamics and visual patterns relevant to goal-driven navigation. Critically, fine-tuning on a diverse data mixture combining ObjectNav, OVON, ImageNav, and an auxiliary spatial reasoning task proves essential for achieving robustness and broad generalization. FiLM-Nav sets a new state-of-the-art in both SPL and success rate on HM3D ObjectNav among open-vocabulary methods, and sets a state-of-the-art SPL on the challenging HM3D-OVON benchmark, demonstrating strong generalization to unseen object categories. Our work validates that directly fine-tuning VLMs on diverse simulated embodied data is a highly effective pathway towards generalizable and efficient semantic navigation capabilities.
Problem

Research questions and friction points this paper is trying to address.

Adapting web-scale VLM knowledge for embodied robotic navigation decision-making
Enabling robots to locate objects using free-form language instructions
Achieving robust generalization across diverse navigation scenarios and environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes a pre-trained VLM directly as the navigation policy
Selects the next exploration frontier from raw visual trajectory history and the language goal
Mixes diverse simulated tasks (ObjectNav, OVON, ImageNav, spatial reasoning) for broad generalization
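The policy loop described above can be sketched as follows. This is a hypothetical illustration, not the paper's code: `build_prompt`, `select_frontier`, and the mock scorer are invented names, and a real fine-tuned VLM would consume images rather than the text placeholders used here.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Frontier:
    frontier_id: int
    description: str  # stand-in for the frontier's visual context (a real system uses images)

@dataclass
class NavState:
    goal: str  # free-form language goal, e.g. "blue armchair"
    history: List[str] = field(default_factory=list)  # summaries of past observations

def build_prompt(state: NavState, frontiers: List[Frontier]) -> str:
    """Serialize goal, trajectory history, and candidate frontiers into one query."""
    lines = [f"Goal: {state.goal}", "History:"]
    lines += [f"  step {i}: {obs}" for i, obs in enumerate(state.history)]
    lines.append("Frontiers:")
    lines += [f"  [{f.frontier_id}] {f.description}" for f in frontiers]
    return "\n".join(lines)

def select_frontier(
    state: NavState,
    frontiers: List[Frontier],
    score_with_vlm: Callable[[str, Frontier], float],
) -> Frontier:
    """Pick the candidate frontier the (mock) VLM scores highest."""
    prompt = build_prompt(state, frontiers)
    return max(frontiers, key=lambda f: score_with_vlm(prompt, f))

# Mock scorer standing in for the fine-tuned VLM: counts goal words
# that appear in a frontier's description.
def mock_vlm_score(prompt: str, frontier: Frontier) -> float:
    goal_words = prompt.splitlines()[0].removeprefix("Goal: ").lower().split()
    return float(sum(w in frontier.description.lower() for w in goal_words))

state = NavState(goal="blue armchair", history=["entered hallway", "saw kitchen ahead"])
frontiers = [Frontier(0, "dark corridor"), Frontier(1, "living room with an armchair")]
best = select_frontier(state, frontiers, mock_vlm_score)
print(best.frontier_id)  # → 1
```

The key design point the paper emphasizes is that the scoring model is the VLM itself, fine-tuned end-to-end on mixed simulated navigation data, rather than a zero-shot model or a hand-built map module.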