MMDisCo: Multi-Modal Discriminator-Guided Cooperative Diffusion for Joint Audio and Video Generation

📅 2024-05-28
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of high-cost, low-alignment multimodal generation for audio-video synthesis. To this end, we propose a lightweight audio-video co-generation framework that avoids end-to-end joint training. Instead, it leverages pretrained unimodal diffusion models and introduces a novel “Denoiser-as-Discriminator” mechanism: the optimal discriminator’s gradient directly steers cross-modal denoising, enabling efficient, alignment-aware generation without joint fine-tuning. The framework incorporates only a minimal set of learnable parameters—integrating gradient-guided conditioning and a lightweight cross-modal guidance module. Experiments demonstrate substantial improvements in temporal and semantic alignment across multiple benchmarks, while preserving high unimodal fidelity. Notably, the parameter count is reduced by over an order of magnitude compared to end-to-end multimodal diffusion models.

Technology Category

Application Category

📝 Abstract
This study aims to construct an audio-video generative model with minimal computational cost by leveraging pre-trained single-modal generative models for audio and video. To achieve this, we propose a novel method that guides single-modal models to cooperatively generate well-aligned samples across modalities. Specifically, given two pre-trained base diffusion models, we train a lightweight joint guidance module to adjust scores separately estimated by the base models to match the score of joint distribution over audio and video. We show that this guidance can be computed using the gradient of the optimal discriminator, which distinguishes real audio-video pairs from fake ones independently generated by the base models. Based on this analysis, we construct a joint guidance module by training this discriminator. Additionally, we adopt a loss function to stabilize the discriminator's gradient and make it work as a noise estimator, as in standard diffusion models. Empirical evaluations on several benchmark datasets demonstrate that our method improves both single-modal fidelity and multimodal alignment with relatively few parameters. The code is available at: https://github.com/SonyResearch/MMDisCo.
Problem

Research questions and friction points this paper is trying to address.

Minimizes computational cost
Enhances audio-video alignment
Leverages pre-trained models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Modal Discriminator-Guided Cooperative Diffusion
Lightweight Joint Guidance Module
Stabilized Discriminator Gradient as Noise Estimator
🔎 Similar Papers
No similar papers found.