🤖 AI Summary
This work addresses the limited generalization of embodied agents under environmental and acoustic variation by proposing a robust audio-visual autonomous navigation method. The approach introduces an audio spatial feature encoder that extracts target-relevant spatial states, and combines an audio intensity attention mechanism with a spatial-state-guided multimodal fusion strategy. Together these enable dynamic cross-modal alignment and adaptive feature integration, mitigating the noise induced by perceptual uncertainty. Experimental results on the Replica and Matterport3D datasets show that the proposed method significantly outperforms existing approaches on unheard-sound tasks, with substantial gains in cross-scene generalization.
📝 Abstract
Audio-visual navigation requires an agent to use visual and auditory information in complex 3D environments to localize a target and plan a path toward it, achieving autonomous navigation. The core challenge of the task is generalization: the agent must move beyond dependence on its training data and navigate reliably when the environment and sound sources change. To address this challenge, we propose an Audio Spatially-Guided Fusion method for audio-visual navigation. First, we design an audio spatial feature encoder that adaptively extracts target-related spatial state information through an audio intensity attention mechanism; building on this, we introduce Audio Spatial State Guided Fusion (ASGF), which dynamically aligns and adaptively fuses multimodal features, effectively alleviating the noise interference caused by perceptual uncertainty. Experimental results on the Replica and Matterport3D datasets show that our method is particularly effective on unheard-sound tasks, demonstrating improved generalization under unknown sound source distributions.
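To make the two components concrete, here is a minimal PyTorch sketch of what an intensity-weighted audio spatial encoder and a spatial-state-guided fusion module could look like. The abstract does not specify architectures, so all shapes, layer choices, and the sigmoid-gating formulation below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the two components described above, assuming a SoundSpaces-style
# setup: flattened binaural spectrogram patches plus a visual feature vector
# per step. Names (AudioSpatialEncoder, ASGF) mirror the abstract; the
# internals are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioSpatialEncoder(nn.Module):
    """Encodes audio features into a spatial state via intensity attention."""

    def __init__(self, audio_dim: int = 128, state_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(audio_dim, state_dim)
        # Scores each time-frequency patch; patches judged more relevant by
        # the learned intensity score receive larger attention weights.
        self.intensity_score = nn.Linear(audio_dim, 1)

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (B, N, audio_dim) -- N flattened spectrogram patches
        weights = F.softmax(self.intensity_score(audio_feats), dim=1)  # (B, N, 1)
        state = (weights * self.proj(audio_feats)).sum(dim=1)          # (B, state_dim)
        return state


class ASGF(nn.Module):
    """Audio Spatial State Guided Fusion: the audio spatial state gates the
    visual features before the two modalities are fused."""

    def __init__(self, state_dim: int = 128, visual_dim: int = 256):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(state_dim + visual_dim, visual_dim),
            nn.Sigmoid(),  # per-channel gate in [0, 1]
        )
        self.fuse = nn.Linear(state_dim + visual_dim, state_dim)

    def forward(self, spatial_state: torch.Tensor,
                visual_feats: torch.Tensor) -> torch.Tensor:
        # spatial_state: (B, state_dim), visual_feats: (B, visual_dim)
        joint = torch.cat([spatial_state, visual_feats], dim=-1)
        gated_visual = self.gate(joint) * visual_feats  # damp uncertain channels
        return self.fuse(torch.cat([spatial_state, gated_visual], dim=-1))


# Usage: the fused embedding would feed a navigation policy
# (e.g. a recurrent actor-critic, as is typical for this task).
encoder, fusion = AudioSpatialEncoder(), ASGF()
audio = torch.randn(2, 49, 128)   # e.g. a 7x7 grid of spectrogram patches
visual = torch.randn(2, 256)
fused = fusion(encoder(audio), visual)
print(fused.shape)  # torch.Size([2, 128])
```

One plausible reading of the abstract's "dynamic alignment" is exactly this kind of audio-conditioned gating: the audio spatial state decides, per channel, how much of the visual evidence to trust, which is how the method could suppress noise from perceptual uncertainty.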