🤖 AI Summary
Existing vision-and-language navigation (VLN) methods rely on static scene assumptions and therefore generalize poorly to dynamic real-world environments. This work integrates a dynamic geometry foundation model into the navigation framework, enabling explicit 3D spatial modeling and vision-language alignment through cross-branch feature fusion. Furthermore, we introduce a pose-free, adaptive-resolution token-pruning mechanism that compresses redundant information across long temporal sequences. By incorporating dynamic geometric priors for the first time, our approach strengthens spatial understanding, improving both robustness and computational efficiency in dynamic settings. Experimental results demonstrate state-of-the-art performance across multiple benchmarks, achieving high navigation accuracy with low inference overhead.
📝 Abstract
Vision-and-Language Navigation (VLN) requires an agent to interpret visual observations and language instructions in order to navigate unseen environments. Most existing approaches rely on static scene assumptions and struggle to generalize to dynamic, real-world scenarios. To address this challenge, we propose DyGeoVLN, a dynamic geometry-aware VLN framework. Our method infuses a dynamic geometry foundation model into the VLN pipeline through cross-branch feature fusion, enabling explicit 3D spatial representation and visual-semantic reasoning. To efficiently compress historical token information over long-horizon, dynamic navigation, we further introduce a novel pose-free, adaptive-resolution token-pruning strategy that removes spatio-temporally redundant tokens and reduces inference cost. Extensive experiments demonstrate that our approach achieves state-of-the-art performance on multiple benchmarks and exhibits strong robustness in real-world environments.
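To make the pruning idea concrete: the abstract does not specify the algorithm, but one common way to remove spatio-temporally redundant tokens is to greedily drop any historical token that is too similar (by cosine similarity) to a token already kept. The sketch below is purely illustrative under that assumption; the function name `prune_tokens` and the similarity threshold are not from the paper.

```python
# Illustrative sketch only: a generic similarity-based token-pruning step,
# NOT the authors' actual algorithm. Names and the threshold are assumptions.
import numpy as np

def prune_tokens(tokens: np.ndarray, sim_threshold: float = 0.9) -> np.ndarray:
    """Greedily keep a token only if it is not too similar to any kept token.

    tokens: (T, D) array of T historical feature tokens of dimension D.
    Returns the retained (T', D) subset, preserving temporal order.
    """
    # Normalize rows so dot products equal cosine similarity.
    normed = tokens / (np.linalg.norm(tokens, axis=1, keepdims=True) + 1e-8)
    kept = [0]  # always keep the earliest token
    for i in range(1, len(tokens)):
        sims = normed[kept] @ normed[i]  # similarity to all kept tokens
        if sims.max() < sim_threshold:
            kept.append(i)
    return tokens[kept]

# A duplicate-heavy history (repeated observations) collapses to a few tokens.
rng = np.random.default_rng(0)
base = rng.normal(size=(3, 8))  # 3 distinct "scenes"
history = np.vstack([base[i % 3] + 1e-3 * rng.normal(size=8) for i in range(12)])
pruned = prune_tokens(history)
print(history.shape, "->", pruned.shape)
```

Because near-duplicate observations have cosine similarity close to 1, long stretches of static viewpoints shrink to a handful of representatives, which is the kind of inference-cost reduction the abstract claims.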