🤖 AI Summary
Audio-visual segmentation (AVS) suffers from inadequate audio representation learning and weak audio-visual semantic alignment. Method: We propose CL-CLDM, the first conditional latent diffusion model for AVS integrated with contrastive learning, in which audio serves as an explicit condition guiding the generation of sound-source segmentation masks. Specifically: (1) We are the first to incorporate contrastive learning into the conditional diffusion framework, explicitly strengthening audio's driving role in segmentation via cross-modal positive/negative sample discrimination; (2) We introduce a novel audio-visual correspondence modeling paradigm, equivalent to maximizing the mutual information between segmentation masks and audio representations. Results: CL-CLDM achieves significant improvements on mainstream benchmarks, notably +3.2% mIoU, demonstrating that audio can be modeled effectively to enable semantically consistent segmentation. Code and models are publicly available.
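The summary frames AVS as conditional generation: a latent diffusion model is trained to denoise the ground-truth mask latent with the audio embedding as the condition. A minimal NumPy sketch of one such training step is below; `eps_model`, the noise schedule, and all tensor shapes are illustrative assumptions, not the paper's actual network or code.

```python
import numpy as np

def diffusion_training_loss(z0, audio_cond, eps_model, alpha_bar, rng):
    """One training step of an audio-conditioned latent diffusion model:
    noise the ground-truth mask latent z0 to z_t via the forward process,
    then regress the injected noise with the audio embedding as condition.
    (Toy sketch; eps_model stands in for the paper's denoising network.)"""
    t = rng.integers(len(alpha_bar))                    # random timestep
    eps = rng.normal(size=z0.shape)                     # target noise
    ab = alpha_bar[t]
    z_t = np.sqrt(ab) * z0 + np.sqrt(1.0 - ab) * eps    # forward q(z_t | z0)
    eps_hat = eps_model(z_t, audio_cond, t)             # conditioned prediction
    return np.mean((eps - eps_hat) ** 2)                # simple epsilon-MSE
```

At test time the same conditional network is applied iteratively, denoising from pure noise to a mask latent, so the audio condition steers every refinement step.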
📝 Abstract
We propose a latent diffusion model with contrastive learning for audio-visual segmentation (AVS) that fully exploits the contribution of audio. We interpret AVS as a conditional generation task in which audio serves as the conditional variable for segmenting the sound producer(s). Under this interpretation, modeling the correlation between the audio and the final segmentation map is essential to guarantee audio's contribution. We introduce a latent diffusion model into our framework to achieve semantically correlated representation learning: the diffusion model learns the conditional generation process of the ground-truth segmentation map, yielding ground-truth-aware inference when the denoising process is performed at test time. For a conditional diffusion model, it is essential that the conditional variable actually contributes to the model output. We therefore introduce contrastive learning into our framework to learn audio-visual correspondence, which we show is equivalent to maximizing the mutual information between the model prediction and the audio data. In this way, our latent diffusion model with contrastive learning explicitly maximizes the contribution of audio to AVS. Experimental results on the benchmark dataset verify the effectiveness of our solution. Code and results are available on our project page: https://github.com/OpenNLPLab/DiffusionAVS.
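The contrastive objective described above, pairing each predicted mask representation with its audio clip as a positive and the rest of the batch as negatives, is standardly realized as an InfoNCE loss, whose minimization lower-bounds the mutual information between the two representations. A self-contained NumPy sketch under that assumption (function name, temperature, and embedding shapes are illustrative, not the paper's code):

```python
import numpy as np

def info_nce(mask_emb, audio_emb, tau=0.07):
    """Symmetric InfoNCE over a batch: matching (mask, audio) pairs are
    positives; every other cross-pairing in the batch is a negative."""
    # L2-normalize so dot products are cosine similarities.
    m = mask_emb / np.linalg.norm(mask_emb, axis=1, keepdims=True)
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    logits = m @ a.T / tau               # (B, B) similarity matrix
    idx = np.arange(len(m))              # positives lie on the diagonal

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()    # cross-entropy against diagonal

    # Average both directions: mask -> audio and audio -> mask.
    return 0.5 * (xent(logits) + xent(logits.T))
```

Driving this loss down forces the mask representation to be predictable from its audio clip (and vice versa), which is the sense in which contrastive training maximizes the audio's contribution to the segmentation.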