MG-Nav: Dual-Scale Visual Navigation via Sparse Spatial Memory

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of jointly achieving long-range path planning and precise target approach in zero-shot visual navigation, this paper proposes a global-local collaborative framework. It constructs a region-centered sparse Spatial Memory Graph (SMG) to jointly encode environmental geometry and object semantics; designs an image-to-instance hybrid retrieval mechanism enabling cross-view, target-conditioned path planning; and introduces a VGGT-adapter geometric module coupled with a frequency-aware dual-layer control architecture to support adaptive switching between point-goal and image-goal navigation modes. Evaluated on the HM3D and MP3D instance-image-goal benchmarks, our method achieves state-of-the-art zero-shot performance. It demonstrates strong robustness to dynamic scene rearrangements and unseen environments, significantly improving navigation success rates in complex, real-world settings.

📝 Abstract
We present MG-Nav (Memory-Guided Navigation), a dual-scale framework for zero-shot visual navigation that unifies global memory-guided planning with local geometry-enhanced control. At its core is the Sparse Spatial Memory Graph (SMG), a compact, region-centric memory in which each node aggregates multi-view keyframe features and object semantics, capturing both appearance and spatial structure while preserving viewpoint diversity. At the global level, the agent is localized on the SMG and a goal-conditioned node path is planned via image-to-instance hybrid retrieval, producing a sequence of reachable waypoints for long-horizon guidance. At the local level, a navigation foundation policy executes these waypoints in point-goal mode with obstacle-aware control, and switches to image-goal mode when navigating from the final node toward the visual target. To further enhance viewpoint alignment and goal recognition, we introduce VGGT-adapter, a lightweight geometric module built on the pre-trained VGGT model that aligns observation and goal features in a shared 3D-aware space. MG-Nav runs global planning and local control at different frequencies, using periodic re-localization to correct accumulated errors. Experiments on the HM3D Instance-Image-Goal and MP3D Image-Goal benchmarks demonstrate that MG-Nav achieves state-of-the-art zero-shot performance and remains robust under dynamic rearrangements and unseen scene conditions.
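To make the SMG and its image-to-instance hybrid retrieval concrete, here is a minimal sketch of a region-centric node and a goal-conditioned node lookup. All names (`SMGNode`, `hybrid_retrieve`), the cosine-similarity scoring, and the `alpha` mixing weight are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class SMGNode:
    """One region-centric node of the Sparse Spatial Memory Graph (sketch).

    Each node stores multi-view keyframe embeddings (appearance across
    viewpoints) and per-instance object embeddings (semantics), plus
    traversability edges to neighboring regions.
    """
    node_id: int
    position: np.ndarray
    keyframe_feats: list = field(default_factory=list)   # multi-view image embeddings
    object_feats: dict = field(default_factory=dict)     # instance label -> embedding
    neighbors: set = field(default_factory=set)          # ids of reachable nodes


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def hybrid_retrieve(graph, goal_img_feat, goal_obj_feat, alpha=0.5):
    """Score nodes by both image-level and instance-level similarity.

    Taking the max over a node's keyframes gives cross-view robustness;
    mixing in the best object-instance match conditions retrieval on the
    target. The linear blend with `alpha` is an assumed scoring rule.
    """
    def score(node: SMGNode) -> float:
        img = max((cosine(goal_img_feat, f) for f in node.keyframe_feats), default=0.0)
        obj = max((cosine(goal_obj_feat, f) for f in node.object_feats.values()), default=0.0)
        return alpha * img + (1.0 - alpha) * obj

    return max(graph, key=score)
```

The retrieved node would then serve as the goal for graph-level path planning, with the remaining node sequence emitted as point-goal waypoints.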
Problem

Research questions and friction points this paper is trying to address.

Develops a dual-scale framework for zero-shot visual navigation
Integrates global memory-guided planning with local geometry-enhanced control
Enhances viewpoint alignment and goal recognition in unseen scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-scale framework unifies global memory planning with local geometry control
Sparse Spatial Memory Graph aggregates multi-view semantics for compact representation
VGGT-adapter aligns observation-goal features in 3D-aware space
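The frequency-aware dual-layer control described above can be sketched as a loop in which the global planner re-localizes and re-plans at a low rate while the local policy acts every step, switching from point-goal to image-goal mode once the final memory node is reached. Every function name, the `replan_every` period, and the step/termination conventions here are hypothetical stand-ins for the paper's components.

```python
def navigate(graph, localize, plan_path, point_goal_step, image_goal_step,
             goal_img, replan_every=20, max_steps=500):
    """Dual-frequency control sketch: slow global planning, fast local control.

    - localize(): re-localizes the agent on the memory graph (error correction).
    - plan_path(): returns a waypoint list from the current node to the goal node.
    - point_goal_step(wp): one local control step toward a waypoint; True when reached.
    - image_goal_step(goal_img): one image-goal step toward the visual target;
      True when the target is reached.
    """
    waypoints = []
    for t in range(max_steps):
        if t % replan_every == 0:            # low-frequency: re-localize and re-plan
            current = localize()
            waypoints = plan_path(graph, current, goal_img)
        if waypoints:                        # high-frequency: point-goal mode
            if point_goal_step(waypoints[0]):
                waypoints.pop(0)             # waypoint reached, advance along path
        else:                                # final node reached: image-goal mode
            if image_goal_step(goal_img):
                return True                  # visual target reached
    return False
```

Decoupling the two rates lets the expensive retrieval/planning run only periodically while obstacle-aware control stays reactive, and the periodic re-localization bounds drift between plans.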
Authors
Bo Wang (The University of Hong Kong)
Jiehong Lin (South China University of Technology): Computer Vision, Machine Learning
Chenzhi Liu (The University of Hong Kong)
Xinting Hu (Max Planck Institute for Informatics): Multimodal Reasoning, Continual Learning, Semi-Supervised Learning
Yifei Yu (The University of Hong Kong)
Tianjia Liu (The University of Hong Kong)
Zhongrui Wang (Southern University of Science and Technology)
Xiaojuan Qi (Assistant Professor, The University of Hong Kong): 3D Vision, Deep Learning, Artificial Intelligence, Medical Image Analysis