Learning What To Hear: Boosting Sound-Source Association For Robust Audiovisual Instance Segmentation

📅 2025-09-25
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing audio-visual instance segmentation methods suffer from visual bias: unified additive fusion weakens query specificity toward sound sources, while purely visual supervision causes queries to converge on arbitrarily salient objects rather than actual sound emitters. To address this, we propose an audio-centric query generation mechanism that employs cross-attention for sound-source-selective modeling. Additionally, we introduce a sound-aware ordinal counting loss that jointly incorporates ordinal regression and monotonic consistency constraints to explicitly encode prior knowledge of the number of sounding objects, thereby mitigating vision-dominant bias. Our approach achieves improvements of +1.64 mAP, +0.6 HOTA, and +2.06 FSLA on the AVISeg benchmark, demonstrating the effectiveness of sound-source-driven query specialization and explicit counting supervision for distinguishing and localizing multiple concurrent sound sources.
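The summary contrasts uniform additive fusion with cross-attention-based query generation. A minimal sketch of the idea, with all shapes, names, and the single-head attention form assumed rather than taken from the paper: each learnable query computes its own attention distribution over audio frames, so different queries can absorb different sound-source evidence instead of all receiving the same added audio signal.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def audio_centric_queries(queries, audio_feats):
    """Cross-attention sketch: queries (N, d) attend to audio frames (T, d).

    Each query forms its own attention distribution over the audio
    sequence, yielding a sound-specific prior per query, unlike additive
    fusion, which would add one shared audio vector to every query.
    """
    d = queries.shape[-1]
    scores = queries @ audio_feats.T / np.sqrt(d)   # (N, T) similarity
    attn = softmax(scores, axis=-1)                 # per-query weights over audio
    return attn @ audio_feats                       # (N, d) audio-conditioned queries

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 16))    # 4 hypothetical object queries
a = rng.standard_normal((10, 16))   # 10 hypothetical audio frames
out = audio_centric_queries(q, a)
print(out.shape)  # (4, 16)
```

In a full model the queries, keys, and values would pass through learned projections; this sketch omits them to isolate the selective-attention mechanism itself.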

📝 Abstract
Audiovisual instance segmentation (AVIS) requires accurately localizing and tracking sounding objects throughout video sequences. Existing methods suffer from visual bias stemming from two fundamental issues: uniform additive fusion prevents queries from specializing to different sound sources, while visual-only training objectives allow queries to converge to arbitrary salient objects. We propose Audio-Centric Query Generation using cross-attention, enabling each query to selectively attend to distinct sound sources and carry sound-specific priors into visual decoding. Additionally, we introduce Sound-Aware Ordinal Counting (SAOC) loss that explicitly supervises sounding object numbers through ordinal regression with monotonic consistency constraints, preventing visual-only convergence during training. Experiments on AVISeg benchmark demonstrate consistent improvements: +1.64 mAP, +0.6 HOTA, and +2.06 FSLA, validating that query specialization and explicit counting supervision are crucial for accurate audiovisual instance segmentation.
Problem

Research questions and friction points this paper is trying to address.

Addresses visual bias in audiovisual instance segmentation methods
Enables queries to specialize for distinct sound sources
Introduces explicit supervision for sounding object counting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Audio-Centric Query Generation using cross-attention
Sound-Aware Ordinal Counting loss with monotonic constraints
Query specialization and explicit counting supervision
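The Sound-Aware Ordinal Counting loss combines ordinal regression with a monotonic consistency constraint. A hedged sketch of one standard way to realize that combination (the function name, the cumulative-target encoding, and the hinge penalty are assumptions, not the paper's exact formulation): the count n is encoded as cumulative binary targets t_k = 1 if k < n, the ordinal term is binary cross-entropy against those targets, and the consistency term penalizes any predicted P(count > k) that increases with k.

```python
import numpy as np

def saoc_loss(logits, count, lam=1.0):
    """Sketch of a sound-aware ordinal counting loss (details assumed).

    logits: (K,) scores where sigmoid(logits[k]) estimates P(count > k).
    count:  ground-truth number of sounding objects.
    Ordinal term: BCE against cumulative targets, e.g. count=2, K=5 -> 1,1,0,0,0.
    Consistency term: hinge on p[k+1] - p[k], since P(count > k) must be
    non-increasing in k for the predictions to describe a valid count.
    """
    K = logits.shape[0]
    p = 1.0 / (1.0 + np.exp(-logits))               # sigmoid probabilities
    targets = (np.arange(K) < count).astype(float)  # cumulative encoding
    eps = 1e-9
    bce = -np.mean(targets * np.log(p + eps) + (1 - targets) * np.log(1 - p + eps))
    mono = np.sum(np.maximum(0.0, p[1:] - p[:-1]))  # monotonicity violations
    return bce + lam * mono

# Decreasing logits respect the ordinal structure and match count=2 ...
loss_good = saoc_loss(np.array([4.0, 3.0, -3.0, -4.0, -5.0]), count=2)
# ... while non-monotone logits pay both the BCE and the consistency penalty.
loss_bad = saoc_loss(np.array([4.0, -3.0, 3.0, -4.0, -5.0]), count=2)
print(loss_good < loss_bad)  # True
```

The predicted count at inference would simply be the number of probabilities above 0.5, which the monotonic constraint keeps well defined.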