AI Summary
Current vision-language models (VLMs) for Vision-and-Language Navigation (VLN) typically employ end-to-end, short-horizon, discrete action mapping, resulting in jerky motion, high response latency, and poor adaptability to dynamic obstacles and long-horizon planning. To address these limitations, we propose DualVLN, the first dual-system VLN foundation model. Its System 2 ("slow") leverages a VLM for high-level semantic reasoning to generate mid-term waypoints, while its System 1 ("fast") employs a lightweight multimodal Diffusion Transformer that fuses pixel-level observations and latent states to produce smooth, real-time trajectories. This architecture decouples global path planning from local control, enabling millisecond-scale responsiveness without sacrificing generalization. Experiments demonstrate that DualVLN achieves state-of-the-art performance across all major VLN benchmarks and exhibits robust long-horizon planning and adaptive obstacle avoidance in realistic dynamic environments.
Abstract
While recent large vision-language models (VLMs) have improved generalization in vision-language navigation (VLN), existing methods typically rely on end-to-end pipelines that map vision-language inputs directly to short-horizon discrete actions. Such designs often produce fragmented motions, incur high latency, and struggle with real-world challenges like dynamic obstacle avoidance. We propose DualVLN, the first dual-system VLN foundation model that synergistically integrates high-level reasoning with low-level action execution. System 2, a VLM-based global planner, "grounds slowly" by predicting mid-term waypoint goals via image-grounded reasoning. System 1, a lightweight Diffusion Transformer policy with multi-modal conditioning, "moves fast" by leveraging both explicit pixel goals and latent features from System 2 to generate smooth and accurate trajectories. The dual-system design enables robust real-time control and adaptive local decision-making in complex, dynamic environments. By decoupling training, the VLM retains its generalization, while System 1 achieves interpretable and effective local navigation. DualVLN outperforms prior methods across all VLN benchmarks, and real-world experiments demonstrate robust long-horizon planning and real-time adaptability in dynamic environments.
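The dual-system control pattern described above can be illustrated with a minimal sketch: a slow planner (standing in for the VLM-based System 2) emits a mid-term waypoint at low frequency, while a fast policy (standing in for the diffusion-transformer System 1) refines a short trajectory toward that waypoint at every control tick. All class names, method signatures, and the toy "denoising" update below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a dual-system navigation loop: System 2 replans
# waypoints slowly, System 1 produces trajectories at every control step.
import numpy as np

class SlowPlanner:
    """Stand-in for a VLM-based System 2: maps (image, instruction) to a
    pixel-space waypoint goal plus a latent conditioning vector."""
    def plan(self, image: np.ndarray, instruction: str):
        # A real system would run VLM reasoning; here we just pick the
        # brightest pixel as a dummy goal and return a zero latent.
        idx = np.unravel_index(np.argmax(image), image.shape)
        return np.array(idx, dtype=float), np.zeros(8)

class FastPolicy:
    """Stand-in for a diffusion-policy System 1: starts from a noisy
    trajectory and iteratively refines it toward the waypoint."""
    def act(self, pos, waypoint, latent, horizon=4, steps=8):
        traj = np.tile(pos, (horizon, 1)) + np.random.randn(horizon, 2)
        targets = np.linspace(pos, waypoint, horizon + 1)[1:]
        for _ in range(steps):            # crude "denoising": pull toward targets
            traj += 0.5 * (targets - traj)
        return traj

def control_loop(image, instruction, pos, ticks=6, replan_every=3):
    planner, policy = SlowPlanner(), FastPolicy()
    waypoint, latent = planner.plan(image, instruction)
    for t in range(ticks):
        if t % replan_every == 0:         # System 2 runs at low frequency
            waypoint, latent = planner.plan(image, instruction)
        traj = policy.act(pos, waypoint, latent)   # System 1 runs every tick
        pos = traj[0]                     # execute the first trajectory step
    return pos, waypoint
```

The key design point the sketch mirrors is the asymmetric update rate: the expensive planner call happens only every `replan_every` ticks, while the cheap trajectory refinement runs at the control frequency, which is what allows low-latency execution without giving up high-level reasoning.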