🤖 AI Summary
Open-vocabulary object navigation faces two key challenges: low localization success for unseen objects and opaque decision-making. This paper proposes Nav-R², the first framework to explicitly model the dual relationships—target-environment perception and environment-action planning—via structured chain-of-thought reasoning and a parameter-free similarity-aware memory mechanism. Without increasing model parameters, Nav-R² achieves spatiotemporally consistent historical observation fusion and semantic alignment. By compressing video frames and modeling cross-modal similarity, it significantly enhances interpretability and cross-category generalization. Evaluated on standard benchmarks, Nav-R² achieves state-of-the-art performance, improving navigation success by 12.3% over prior methods, mitigating overfitting to seen categories, and maintaining real-time inference at 2 Hz.
📝 Abstract
Object-goal navigation in open-vocabulary settings requires agents to locate novel objects in unseen environments, yet existing approaches suffer from opaque decision-making processes and low success rates in locating unseen objects. To address these challenges, we propose Nav-R$^2$, a framework that explicitly models two critical types of relationships, target-environment modeling and environment-action planning, through structured Chain-of-Thought (CoT) reasoning coupled with a Similarity-Aware Memory (SA-Mem). We construct a NavR$^2$-CoT dataset that teaches the model to perceive the environment, focus on target-related objects in the surrounding context, and finally make future action plans. Our SA-Mem preserves the most target-relevant and current-observation-relevant features from both temporal and semantic perspectives by compressing video frames and fusing historical observations, while introducing no additional parameters. Compared to previous methods, Nav-R$^2$ achieves state-of-the-art performance in localizing unseen objects through a streamlined and efficient pipeline, avoiding overfitting to seen object categories while maintaining real-time inference at 2 Hz. Resources will be made publicly available at https://github.com/AMAP-EAI/Nav-R2.
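The abstract does not give the internals of SA-Mem, but a parameter-free, similarity-based memory of the kind it describes can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names (`cosine`, `select_memory`), the weighting scheme `alpha`, and the top-`k` selection rule are all assumptions. The core idea shown is scoring each historical frame feature by similarity to both the target embedding (semantic relevance) and the current observation (temporal relevance), then keeping only the best-scoring frames, with no learned parameters involved.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors (plain lists)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_memory(frame_feats, target_feat, current_feat, k=2, alpha=0.5):
    """Parameter-free memory selection sketch (hypothetical, not the paper's code).

    Scores each historical frame by a weighted mix of similarity to the
    target embedding and to the current observation, then keeps the top-k
    frames in their original temporal order.
    """
    scores = [
        alpha * cosine(f, target_feat) + (1 - alpha) * cosine(f, current_feat)
        for f in frame_feats
    ]
    ranked = sorted(range(len(frame_feats)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:k])  # restore temporal order of the selected frames
    return [frame_feats[i] for i in keep]

# Toy usage: three 2-D "frame features"; the target and current observation
# both point along the first axis, so frames aligned with it are kept.
frames = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
selected = select_memory(frames, target_feat=[1.0, 0.0], current_feat=[1.0, 0.0], k=2)
```

The design choice worth noting is that nothing here is trained: relevance comes entirely from similarity in an existing feature space, which is consistent with the abstract's claim that the memory introduces no additional parameters.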