🤖 AI Summary
This work addresses the challenge of real-time audio-visual instance segmentation in continuous video streams, where existing methods—largely offline—struggle to associate sounding instances dynamically and distinguish between sounding and silent object states. To this end, we propose SeaVIS, the first online audio-visual instance segmentation framework, which introduces a Causal Cross-Attention Fusion (CCAF) module and an Audio-Guided Contrastive Learning (AGCL) strategy. Operating under strict causal constraints, SeaVIS enables effective fusion of audio and visual features and facilitates real-time tracking of sounding instances. The approach significantly suppresses interference from silent objects and enhances sound-following capability. Evaluated on the AVISeg dataset, SeaVIS outperforms state-of-the-art methods across multiple metrics while maintaining inference speed suitable for real-time applications.
📝 Abstract
Recently, the audio-visual instance segmentation (AVIS) task has been introduced, aiming to identify, segment, and track individual sounding instances in videos. However, prevailing methods primarily adopt an offline paradigm that cannot associate detected instances across consecutive clips, making them unsuitable for real-world scenarios involving continuous video streams. To address this limitation, we introduce SeaVIS, the first online framework designed for audio-visual instance segmentation. SeaVIS leverages a Causal Cross-Attention Fusion (CCAF) module to enable efficient online processing, integrating visual features from the current frame with the entire audio history under strict causal constraints. A major challenge for conventional VIS methods is that appearance-based instance association fails to distinguish between an object's sounding and silent states, resulting in the incorrect segmentation of silent objects. To tackle this, we employ an Audio-Guided Contrastive Learning (AGCL) strategy to generate instance prototypes that encode not only visual appearance but also sounding activity. In this way, instances that are preserved during per-frame prediction but do not emit sound can be effectively suppressed during the instance association process, thereby significantly enhancing the sound-following capability of SeaVIS. Extensive experiments on the AVISeg dataset demonstrate that SeaVIS surpasses existing state-of-the-art models across multiple evaluation metrics while maintaining a competitive inference speed suitable for real-time processing.
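To make the causal constraint concrete, here is a minimal NumPy sketch of the kind of fusion the abstract describes: visual instance queries for the current frame attend only to audio features up to and including that frame, so no future audio can influence the output. All names, shapes, and the residual-fusion form are illustrative assumptions, not the paper's actual CCAF implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def causal_cross_attention(visual_q, audio_feats, t):
    """Hypothetical causal cross-attention fusion.

    visual_q:    (Nq, d) instance queries for the current frame.
    audio_feats: (T, d) per-frame audio features for the stream.
    t:           index of the current frame.

    Queries attend only to audio_feats[: t + 1] -- the strict
    causal constraint: frames after t are never touched.
    """
    keys = audio_feats[: t + 1]                    # causal slice of the audio history
    d = visual_q.shape[-1]
    scores = visual_q @ keys.T / np.sqrt(d)        # (Nq, t + 1) scaled dot-product scores
    weights = softmax(scores, axis=-1)             # attention over past/present audio
    return visual_q + weights @ keys               # residual fusion into the queries
```

A quick property check of this sketch: because only `audio_feats[: t + 1]` is read, altering audio features for frames after `t` leaves the fused queries unchanged, which is exactly what "online, under strict causal constraints" requires.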