Synchronized Video-to-Audio Generation via Mel Quantization-Continuum Decomposition

📅 2025-03-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the problem of generating high-fidelity, temporally synchronized audio from silent video. We propose Mel-QCD, a novel paradigm that for the first time disentangles mel-spectrograms into three orthogonal signal components—quantized, continuous, and semantic—and models each separately. To achieve fine-grained, video-conditioned audio generation, we design V2X, a video-driven multi-signal predictor integrating ControlNet-based conditional control with textual inversion. The entire framework is trained end-to-end via diffusion model fine-tuning. Our method achieves state-of-the-art performance across eight comprehensive metrics—including audio quality, temporal synchronization, and semantic consistency—significantly outperforming existing video-to-audio approaches. By explicitly factorizing spectrogram representation and enabling interpretable, controllable cross-modal generation, Mel-QCD establishes a new, principled framework for multimodal synthesis.

Technology Category

Application Category

📝 Abstract
Video-to-audio generation is essential for synthesizing realistic audio tracks that synchronize effectively with silent videos. Following the perspective of extracting essential signals from videos that can precisely control the mature text-to-audio generative diffusion models, this paper presents how to balance the representation of mel-spectrograms in terms of completeness and complexity through a new approach called Mel Quantization-Continuum Decomposition (Mel-QCD). We decompose the mel-spectrogram into three distinct types of signals, employing quantization or continuity to them, we can effectively predict them from video by a devised video-to-all (V2X) predictor. Then, the predicted signals are recomposed and fed into a ControlNet, along with a textual inversion design, to control the audio generation process. Our proposed Mel-QCD method demonstrates state-of-the-art performance across eight metrics, evaluating dimensions such as quality, synchronization, and semantic consistency. Our codes and demos will be released at href{Website}{https://wjc2830.github.io/MelQCD/}.
Problem

Research questions and friction points this paper is trying to address.

Synthesize synchronized audio for silent videos
Balance mel-spectrogram completeness and complexity
Control audio generation using video and text inputs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mel Quantization-Continuum Decomposition for audio generation
Video-to-all predictor for signal prediction
ControlNet with textual inversion for audio control
🔎 Similar Papers
No similar papers found.