🤖 AI Summary
Addressing two key challenges in audio-visual segmentation (AVS), namely feature confusion caused by the overlapping nature of mixed audio signals, and audio-visual matching difficulty arising when the same object produces varied sounds, this paper proposes Dynamic Derivation and Elimination (DDESeg), a novel audio-visual segmentation framework. Methodologically, it introduces: (1) a semantic derivation module that reconstructs the semantic content of the mixed audio signal by enriching the distinct semantics of each individual source, yielding representations that preserve the unique characteristics of each sound; (2) a discriminative feature learning module that enhances the semantic distinctiveness of the derived audio representations for fine-grained audio-visual alignment; and (3) a dynamic elimination module that scores interacted audio-visual features to filter out non-matching audio elements, such as off-screen sounds. Comprehensive experiments demonstrate that the framework achieves superior performance on AVS benchmark datasets, with notable robustness in scenes containing mixed and off-screen audio.
📝 Abstract
Sound-guided object segmentation has drawn considerable attention for its potential to enhance multimodal perception. Previous methods primarily focus on developing advanced architectures to facilitate effective audio-visual interactions, without fully addressing the inherent challenges posed by the nature of audio, i.e., (1) feature confusion due to the overlapping nature of audio signals, and (2) audio-visual matching difficulty arising from the varied sounds produced by the same object. To address these challenges, we propose Dynamic Derivation and Elimination (DDESeg): a novel audio-visual segmentation framework. Specifically, to mitigate feature confusion, DDESeg reconstructs the semantic content of the mixed audio signal by enriching the distinct semantic information of each individual source, deriving representations that preserve the unique characteristics of each sound. To reduce the matching difficulty, we introduce a discriminative feature learning module, which enhances the semantic distinctiveness of the generated audio representations. Considering that not all derived audio representations directly correspond to visual features (e.g., off-screen sounds), we propose a dynamic elimination module to filter out non-matching elements. This module facilitates targeted interaction between sounding regions and relevant audio semantics. By scoring the interacted features, we identify and filter out irrelevant audio information, ensuring accurate audio-visual alignment. Comprehensive experiments demonstrate that our framework achieves superior performance on AVS datasets.
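The paper does not publish implementation details in this abstract, but the dynamic elimination idea (interact each derived audio representation with visual regions, score the interaction, and drop low-scoring sources) can be illustrated with a minimal, hypothetical PyTorch sketch. All shapes, the attention-style interaction, and the sigmoid scoring head are assumptions for illustration, not the authors' actual design:

```python
import torch

def dynamic_elimination(audio_feats, visual_feats, threshold=0.5):
    """Hypothetical sketch of scoring-based elimination of non-matching audio.

    audio_feats:  (S, D) one embedding per derived audio source (assumed shape)
    visual_feats: (R, D) flattened visual region features (assumed shape)
    Returns the audio embeddings judged to match the visual scene, plus scores.
    """
    d = audio_feats.shape[-1]
    # Interaction: attention-style similarity between each source and all regions
    attn = torch.softmax(audio_feats @ visual_feats.T / d ** 0.5, dim=-1)  # (S, R)
    interacted = attn @ visual_feats  # (S, D) audio-conditioned visual summaries
    # Score each source by agreement between its embedding and its visual summary;
    # off-screen or interfering sources should agree poorly and score low
    scores = torch.sigmoid((audio_feats * interacted).sum(dim=-1))  # (S,) in (0, 1)
    keep = scores >= threshold
    return audio_feats[keep], scores
```

In a full model these scores would be learned end-to-end (e.g., supervised or regularized so that off-screen sources receive low scores); here the threshold simply gates which derived sources go on to interact with the segmentation head.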