🤖 AI Summary
Existing audio-visual instance segmentation methods suffer from visual bias rooted in two issues: uniform additive fusion weakens query specificity toward sound sources, and purely visual supervision causes queries to converge on arbitrary salient objects rather than actual sound emitters. To address this, we propose an audio-centric query generation mechanism that employs cross-attention for sound-source-selective modeling. Additionally, we introduce a sound-aware ordinal counting loss that combines ordinal regression with monotonic consistency constraints to explicitly encode prior knowledge of the number of sounding objects, thereby mitigating vision-dominant bias. Our approach achieves improvements of +1.64 mAP, +0.6 HOTA, and +2.06 FSLA on the AVISeg benchmark, demonstrating the effectiveness of sound-source-driven query specialization and explicit counting supervision for distinguishing and localizing multiple concurrent sound sources.
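As a rough illustration of the audio-centric query generation idea, the PyTorch sketch below conditions a set of learnable instance queries on audio features via cross-attention, so each query can pool evidence from a distinct sound source before visual decoding. This is a minimal sketch under assumed shapes and hyperparameters (`num_queries`, `dim`, `num_heads` are illustrative), not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AudioCentricQueryGenerator(nn.Module):
    """Sketch: instance queries cross-attend to audio tokens so that
    each query carries a sound-source-specific prior into the visual
    decoder (names and shapes are assumptions, not the authors' code)."""

    def __init__(self, num_queries: int = 100, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.query_embed = nn.Embedding(num_queries, dim)  # learnable base queries
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (B, T_audio, dim) per-frame audio embeddings
        B = audio_feats.size(0)
        q = self.query_embed.weight.unsqueeze(0).expand(B, -1, -1)  # (B, N, dim)
        # Each query selectively attends to different audio tokens,
        # rather than receiving one uniform additive audio signal.
        attn_out, _ = self.cross_attn(query=q, key=audio_feats, value=audio_feats)
        return self.norm(q + attn_out)  # audio-conditioned queries
```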
📄 Abstract
Audiovisual instance segmentation (AVIS) requires accurately localizing and tracking sounding objects throughout video sequences. Existing methods suffer from visual bias stemming from two fundamental issues: uniform additive fusion prevents queries from specializing to different sound sources, while visual-only training objectives allow queries to converge to arbitrary salient objects. We propose Audio-Centric Query Generation using cross-attention, enabling each query to selectively attend to distinct sound sources and carry sound-specific priors into visual decoding. Additionally, we introduce a Sound-Aware Ordinal Counting (SAOC) loss that explicitly supervises the number of sounding objects through ordinal regression with monotonic consistency constraints, preventing visual-only convergence during training. Experiments on the AVISeg benchmark demonstrate consistent improvements of +1.64 mAP, +0.6 HOTA, and +2.06 FSLA, validating that query specialization and explicit counting supervision are crucial for accurate audiovisual instance segmentation.
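For intuition about how counting supervision can be expressed ordinally, here is a hedged sketch of a SAOC-style loss. The paper's exact formulation is not given here; this follows the standard ordinal-regression recipe (K binary "count exceeds k" targets) plus a penalty that enforces monotonic consistency of the cumulative probabilities. `lambda_mono` and the thresholding scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def saoc_loss(logits: torch.Tensor, counts: torch.Tensor,
              lambda_mono: float = 0.1) -> torch.Tensor:
    """Sketch of an ordinal counting loss (assumed form, not the paper's).

    logits: (B, K) where sigmoid(logits[:, k]) estimates P(count > k).
    counts: (B,) ground-truth number of sounding objects, in [0, K].
    """
    B, K = logits.shape
    # Ordinal targets: target[b, k] = 1 iff counts[b] > k.
    thresholds = torch.arange(K, device=logits.device).unsqueeze(0)  # (1, K)
    targets = (counts.unsqueeze(1) > thresholds).float()             # (B, K)
    ordinal = F.binary_cross_entropy_with_logits(logits, targets)

    # Monotonic consistency: P(count > k) must not increase with k,
    # so penalize any p_{k+1} > p_k violation.
    probs = torch.sigmoid(logits)
    mono = F.relu(probs[:, 1:] - probs[:, :-1]).mean()
    return ordinal + lambda_mono * mono

# At inference the count estimate is the number of thresholds exceeded:
# pred_count = (torch.sigmoid(logits) > 0.5).sum(dim=1)
```

The ordinal decomposition encodes the count as a sequence of cumulative decisions rather than independent classes, which is what makes a monotonicity constraint meaningful: a well-formed predictor can never claim "more than 3" sources while denying "more than 2".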