Contrastive Conditional Latent Diffusion for Audio-visual Segmentation

📅 2023-07-31
🏛️ arXiv.org
📈 Citations: 29
Influential: 1
📄 PDF
🤖 AI Summary
Audio-visual segmentation (AVS) suffers from inadequate audio representation learning and weak audio-visual semantic alignment. Method: We propose CL-CLDM—the first conditional latent diffusion model for AVS integrated with contrastive learning, where audio serves as an explicit condition to guide the generation of sound-source segmentation masks. Specifically: (1) We pioneer the incorporation of contrastive learning into the conditional diffusion framework, explicitly enhancing audio’s driving role in segmentation via cross-modal positive/negative sample discrimination; (2) We introduce a novel audio-visual correspondence modeling paradigm, equivalent to maximizing mutual information between segmentation masks and audio representations. Results: CL-CLDM achieves significant improvements on mainstream benchmarks, notably +3.2% mIoU, demonstrating that audio can be effectively modeled to enable semantically consistent segmentation. Code and models are publicly available.
📝 Abstract
We propose a latent diffusion model with contrastive learning for audio-visual segmentation (AVS) to extensively explore the contribution of audio. We interpret AVS as a conditional generation task, where audio is defined as the conditional variable for sound producer(s) segmentation. With our new interpretation, it is especially necessary to model the correlation between audio and the final segmentation map to ensure its contribution. We introduce a latent diffusion model to our framework to achieve semantic-correlated representation learning. Specifically, our diffusion model learns the conditional generation process of the ground-truth segmentation map, leading to ground-truth aware inference when we perform the denoising process at the test stage. As a conditional diffusion model, we argue it is essential to ensure that the conditional variable contributes to model output. We then introduce contrastive learning to our framework to learn audio-visual correspondence, which is proven consistent with maximizing the mutual information between model prediction and the audio data. In this way, our latent diffusion model via contrastive learning explicitly maximizes the contribution of audio for AVS. Experimental results on the benchmark dataset verify the effectiveness of our solution. Code and results are online via our project page: https://github.com/OpenNLPLab/DiffusionAVS.
Problem

Research questions and friction points this paper is trying to address.

Modeling audio-visual correlation for segmentation
Enhancing audio contribution via contrastive learning
Optimizing density ratio for multimodal data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contrastive conditional latent diffusion model
Density ratio optimization for audio contribution
Ground-truth aware inference in denoising
🔎 Similar Papers
No similar papers found.
Y
Yuxin Mao
School of Electronics and Information, Northwestern Polytechnical University, Xi’an, China
J
Jing Zhang
School of Computing, Australian National University, Canberra, Australia
Mochu Xiang
Mochu Xiang
Northwestern Polytechnical University
Monocular Depth Estimation
Y
Yun-Qiu Lv
School of Electronics and Information, Northwestern Polytechnical University, Xi’an, China
Yiran Zhong
Yiran Zhong
PhD, Australian National University
LLMSelf-supervised LearningVisual Geometry LearningNatural Language ProcessingMultimodal
Y
Yuchao Dai
School of Electronics and Information, Northwestern Polytechnical University, Xi’an, China