AI Summary
This work addresses a critical limitation of existing vision-and-language navigation methods: they rely on fixed geometric thresholds for topological graph sampling, which leads to oversampling in simple regions and undersampling in complex ones, resulting in computational redundancy and increased collision risk. To overcome this, the authors propose DGNav, a framework that employs scene-aware adaptive sampling to dynamically adjust graph density according to environmental complexity. DGNav further introduces a dynamic graph Transformer that integrates multimodal cues from vision, language, and geometry to refine graph connectivity and suppress topological noise. Evaluated on the R2R-CE and RxR-CE benchmarks, DGNav outperforms current state-of-the-art approaches, achieving higher navigation accuracy and safety while maintaining strong exploration efficiency and generalization.
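The scene-aware adaptive sampling idea can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the dispersion measure (mean distance of predicted waypoints to their centroid), and the constants `base_threshold` and `k` are all assumptions chosen to show the principle of "denser sampling where waypoints are dispersed".

```python
import math

def adaptive_threshold(waypoints, base_threshold=3.0, k=1.0):
    """Hypothetical sketch of scene-aware adaptive sampling.

    Shrinks the node-sampling threshold when predicted waypoints are
    dispersed (a complex scene needs denser graph nodes) and keeps it
    near the base value when they cluster (a simple scene). All names
    and constants here are illustrative, not from the paper.
    """
    n = len(waypoints)
    # Centroid of the predicted (x, y) waypoints.
    cx = sum(p[0] for p in waypoints) / n
    cy = sum(p[1] for p in waypoints) / n
    # Dispersion: mean Euclidean distance to the centroid.
    dispersion = sum(math.hypot(p[0] - cx, p[1] - cy) for p in waypoints) / n
    # Higher dispersion -> smaller threshold -> denser node sampling.
    return base_threshold / (1.0 + k * dispersion)
```

With this rule, a tightly clustered waypoint prediction keeps the threshold near `base_threshold`, while a scattered prediction in a cluttered region drives the threshold down, yielding the "densification on demand" behavior described above.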
Abstract
Vision-Language Navigation in Continuous Environments (VLN-CE) presents a core challenge: grounding high-level linguistic instructions into precise, safe, and long-horizon spatial actions. Explicit topological maps have proven to be a vital solution for providing robust spatial memory in such tasks. However, existing topological planning methods suffer from a "Granularity Rigidity" problem. Specifically, these methods typically rely on fixed geometric thresholds to sample nodes, which fails to adapt to varying environmental complexities. This rigidity leads to a critical mismatch: the model tends to over-sample in simple areas, causing computational redundancy, while under-sampling in high-uncertainty regions, increasing collision risks and compromising precision. To address this, we propose DGNav, a framework for Dynamic Topological Navigation, which introduces a context-aware mechanism to modulate map density and connectivity on the fly. Our approach comprises two core innovations: (1) a Scene-Aware Adaptive Strategy that dynamically modulates graph construction thresholds based on the dispersion of predicted waypoints, enabling "densification on demand" in challenging environments; (2) a Dynamic Graph Transformer that reconstructs graph connectivity by fusing visual, linguistic, and geometric cues into dynamic edge weights, enabling the agent to filter out topological noise and improve instruction adherence. Extensive experiments on the R2R-CE and RxR-CE benchmarks demonstrate that DGNav exhibits superior navigation performance and strong generalization. Furthermore, ablation studies confirm that our framework achieves an effective trade-off between navigation efficiency and safe exploration. The code is available at https://github.com/shannanshouyin/DGNav.
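The second innovation, fusing visual, linguistic, and geometric cues into dynamic edge weights, can be sketched as a small PyTorch module. This is an illustrative stand-in, not the paper's Dynamic Graph Transformer: the feature dimensions, the MLP fusion head, and the sigmoid gating are assumptions; the actual model presumably uses attention layers over the graph.

```python
import torch
import torch.nn as nn

class DynamicEdgeWeights(nn.Module):
    """Illustrative sketch (not the authors' code): score each graph
    edge by fusing the two endpoint nodes' visual features, a pooled
    instruction feature, and pairwise geometry (e.g. distance and
    heading) into a scalar weight in (0, 1). Low-weight edges can then
    be pruned as topological noise. Dimensions are assumptions."""

    def __init__(self, vis_dim=512, lang_dim=512, geo_dim=4, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * vis_dim + lang_dim + geo_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, vis_i, vis_j, lang, geo_ij):
        # Concatenate per-edge features from all three modalities,
        # then map to a (0, 1) connectivity weight.
        fused = torch.cat([vis_i, vis_j, lang, geo_ij], dim=-1)
        return torch.sigmoid(self.mlp(fused)).squeeze(-1)
```

Thresholding or softly re-weighting the adjacency matrix with these scores is one plausible way an agent could down-weight spurious connections while keeping edges that agree with the instruction and the scene geometry.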