Audio Spatially-Guided Fusion for Audio-Visual Navigation

📅 2026-04-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limited generalization capability of embodied agents under environmental and acoustic variations by proposing a robust audio-visual autonomous navigation method. The approach introduces an audio spatial feature encoder to extract target-relevant spatial states and incorporates an audio intensity attention mechanism along with a spatial state-guided multimodal fusion strategy. This enables dynamic cross-modal alignment and adaptive feature integration, effectively mitigating noise induced by perceptual uncertainty. Experimental results on the Replica and Matterport3D datasets demonstrate that the proposed method significantly outperforms existing approaches in unseen sound tasks, achieving substantial improvements in cross-scene generalization performance.
📝 Abstract
Audio-visual navigation requires an agent to use visual and auditory information in complex 3D environments to localize a target and plan a path, thereby achieving autonomous navigation. The core challenge of this task is how the agent can move beyond dependence on its training data and generalize well when the environment and the sound sources change. To address this challenge, we propose Audio Spatially-Guided Fusion for audio-visual navigation. First, we design an audio spatial feature encoder that adaptively extracts target-related spatial state information through an audio intensity attention mechanism; building on this, we introduce Audio Spatial State Guided Fusion (ASGF) to achieve dynamic alignment and adaptive fusion of multimodal features, effectively alleviating noise interference caused by perceptual uncertainty. Experimental results on the Replica and Matterport3D datasets indicate that our method is particularly effective on unheard-sound tasks, demonstrating improved generalization under unknown sound-source distributions.
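The abstract describes two components: an intensity-weighted attention over audio features that produces a spatial state, and a fusion step in which that state gates the visual features. The paper does not give equations here, so the following is only a minimal NumPy sketch of one plausible reading; the function names, the softmax-over-intensity weighting, and the sigmoid gating form are illustrative assumptions, not the authors' actual ASGF design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intensity_attention(audio_feats, intensity):
    """Hypothetical audio intensity attention.

    audio_feats: (T, D) per-frame audio embeddings
    intensity:   (T,)  per-frame sound intensity (e.g. RMS energy)
    Returns a (D,) intensity-weighted audio spatial state.
    """
    w = softmax(intensity)      # louder frames receive higher weight
    return w @ audio_feats      # weighted sum over time frames

def spatial_state_guided_fusion(spatial_state, visual_feat):
    """Hypothetical spatial-state-guided fusion (not the paper's ASGF).

    Gates each visual channel by its elementwise agreement with the
    audio spatial state, then blends the two modalities.
    """
    gate = 1.0 / (1.0 + np.exp(-(spatial_state * visual_feat)))  # sigmoid gate
    return gate * visual_feat + (1.0 - gate) * spatial_state
```

In this reading, channels where audio and vision agree pass the visual feature through, while uncertain channels fall back toward the audio spatial state, which matches the abstract's claim of mitigating noise from perceptual uncertainty.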
Problem

Research questions and friction points this paper is trying to address.

audio-visual navigation
generalization
environmental changes
sound source variation
autonomous navigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

audio-visual navigation
spatial attention
multimodal fusion
generalization
audio spatial encoding
Xinyu Zhou
Joint Research Laboratory for Embodied Intelligence, Xinjiang University; Joint International Research Laboratory of Silk Road Multilingual Cognitive Computing, Xinjiang University; School of Computer Science and Technology, Xinjiang University, Urumqi 830017, China
Yinfeng Yu
Associate Professor, Xinjiang University
Embodied intelligence