A Simple but Strong Baseline for Sounding Video Generation: Effective Adaptation of Audio and Video Diffusion Models for Joint Generation

📅 2024-09-26
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
This work addresses two challenges in diffusion-based sounding video generation: keeping audio and video sampling synchronized and strengthening weak cross-modal alignment. The authors build a joint generation framework by integrating pre-trained audio and video diffusion models with additional connecting modules and training the combined model end to end. Two mechanisms encode the temporal alignment prior: timestep adjustment, which supplies each base model with a different timestep so that samples in both modalities evolve at matched rates during generation, and Cross-Modal Conditioning as Positional Encoding (CMC-PE), which injects cross-modal information additively at each temporal position, like a positional encoding, rather than through cross-attention. Experiments validate both mechanisms and show that the method outperforms existing approaches in generation quality, temporal consistency, and cross-modal alignment.

📝 Abstract
In this work, we build a simple but strong baseline for sounding video generation. Given base diffusion models for audio and video, we integrate them with additional modules into a single model and train it to make the model jointly generate audio and video. To enhance alignment between audio-video pairs, we introduce two novel mechanisms in our model. The first one is timestep adjustment, which provides different timestep information to each base model. It is designed to align how samples are generated along with timesteps across modalities. The second one is a new design of the additional modules, termed Cross-Modal Conditioning as Positional Encoding (CMC-PE). In CMC-PE, cross-modal information is embedded as if it represents temporal position information, and the embeddings are fed into the model like positional encoding. Compared with the popular cross-attention mechanism, CMC-PE provides a better inductive bias for temporal alignment in the generated data. Experimental results validate the effectiveness of the two newly introduced mechanisms and also demonstrate that our method outperforms existing methods.
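The abstract contrasts CMC-PE with cross-attention: instead of letting tokens attend to the other modality, per-frame cross-modal features are projected and added to the token at the same temporal position, exactly where a positional encoding would be injected. A minimal numpy sketch of that additive injection, with toy dimensions and a hypothetical linear projection (the paper's exact projection and feature extractors are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions for illustration)
T, d_audio, d_model = 8, 12, 16

video_tokens = rng.normal(size=(T, d_model))  # one video token per frame
audio_feats = rng.normal(size=(T, d_audio))   # per-frame audio features

# Hypothetical learned projection mapping audio features into token space
W = rng.normal(size=(d_audio, d_model)) / np.sqrt(d_audio)

# CMC-PE idea: embed cross-modal information as if it were temporal
# position information, and add it to the tokens like a positional
# encoding -- position t of the video stream only ever sees position t
# of the audio stream, which builds in a temporal-alignment bias.
cmc_pe = audio_feats @ W
conditioned = video_tokens + cmc_pe  # shape (T, d_model)
```

Because the conditioning is tied one-to-one to temporal positions, alignment is enforced by construction, whereas cross-attention must learn which positions correspond.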
Problem

Research questions and friction points this paper is trying to address.

Integrate audio and video diffusion models for joint generation
Enhance alignment between audio-video pairs with novel mechanisms
Improve temporal alignment in generated data using CMC-PE
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates audio and video diffusion models jointly
Introduces timestep adjustment for modality alignment
Uses CMC-PE for cross-modal temporal encoding
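The timestep adjustment above can be sketched as a per-modality remapping of the shared sampler step. The power-curve remapping and step count below are illustrative assumptions, not the paper's exact rule; the point is only that each base model receives its own timestep so that both modalities reach comparable noise levels at every step of joint sampling:

```python
T_MAX = 1000  # assumed number of diffusion steps

def adjusted_timesteps(t_shared, audio_shift=0.85):
    """Hypothetical timestep adjustment: the video model gets the shared
    step unchanged, while the audio model gets a monotonically remapped
    step (a simple power curve here, chosen for illustration)."""
    t_video = t_shared
    t_audio = int(round(T_MAX * (t_shared / T_MAX) ** audio_shift))
    return t_video, min(T_MAX, t_audio)

# During joint sampling, each base model is queried with its own timestep:
schedule = [adjusted_timesteps(t) for t in (999, 500, 100, 0)]
```

A monotone remapping preserves the ordering of noise levels while letting one modality denoise slightly ahead of the other, which is one way to align how samples are generated along timesteps across modalities.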