LoVA: Long-form Video-to-Audio Generation

📅 2024-09-23
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video-to-audio (V2A) methods suffer from poor temporal coherence and weak semantic alignment in generating audio for long videos (>10 seconds), particularly due to the limited capacity of UNet-based diffusion models to capture long-range cross-modal dependencies. This work formally defines and addresses, for the first time, the task of generating semantically aligned audio from long videos. We propose LoVA—a novel diffusion-based model built upon the Diffusion Transformer (DiT) architecture—integrating cross-modal conditional diffusion with long-sequence spatiotemporal feature encoding to fundamentally enhance long-horizon modeling and audio-video semantic consistency. Experiments demonstrate that LoVA matches state-of-the-art performance on the 10-second benchmark while consistently outperforming all baselines on longer-video benchmarks, achieving significant improvements in both objective metrics (e.g., FAD, KL divergence) and subjective evaluation scores.
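To make the cross-modal conditioning idea concrete, here is a minimal NumPy sketch of single-head cross-attention in which audio latent tokens (queries) attend to video features (keys/values). This is an illustrative toy, not the authors' implementation: the function name, dimensions, and random projections are all hypothetical, and a real DiT block would add multi-head attention, timestep conditioning, and learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(audio_tokens, video_tokens, d_head=16, seed=0):
    """Toy single-head cross-attention: audio latents (queries)
    attend to video features (keys/values). Weights are random
    stand-ins for learned projections."""
    rng = np.random.default_rng(seed)
    d_a = audio_tokens.shape[-1]
    d_v = video_tokens.shape[-1]
    Wq = rng.standard_normal((d_a, d_head)) / np.sqrt(d_a)
    Wk = rng.standard_normal((d_v, d_head)) / np.sqrt(d_v)
    Wv = rng.standard_normal((d_v, d_head)) / np.sqrt(d_v)
    Q = audio_tokens @ Wq            # (T_audio, d_head)
    K = video_tokens @ Wk            # (T_video, d_head)
    V = video_tokens @ Wv            # (T_video, d_head)
    attn = softmax(Q @ K.T / np.sqrt(d_head))  # (T_audio, T_video)
    return attn @ V                  # (T_audio, d_head)

# Hypothetical shapes: 60 s of audio latents (~10 tokens/s)
# conditioned on 60 video-frame features
audio = np.random.default_rng(1).standard_normal((600, 32))
video = np.random.default_rng(2).standard_normal((60, 512))
out = cross_attention(audio, video)
print(out.shape)  # (600, 16)
```

Because attention is computed over the full token sequences rather than fixed-size UNet windows, every audio token can condition on every video frame, which is the property the summary credits for better long-range cross-modal dependency modeling.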

📝 Abstract
Video-to-audio (V2A) generation is important for video editing and post-processing, enabling the creation of semantically aligned audio for silent video. However, most existing methods focus on generating short-form audio for short video segments (less than 10 seconds), while paying little attention to long-form video inputs. For current UNet-based diffusion V2A models, an unavoidable problem when handling long-form audio generation is inconsistency within the final concatenated audio. In this paper, we first highlight the importance of the long-form V2A problem. We then propose LoVA, a novel model for Long-form Video-to-Audio generation. Built on the Diffusion Transformer (DiT) architecture, LoVA proves more effective at generating long-form audio than existing autoregressive models and UNet-based diffusion models. Extensive objective and subjective experiments demonstrate that LoVA achieves comparable performance on a 10-second V2A benchmark and outperforms all baselines on a benchmark with long-form video input.
Problem

Research questions and friction points this paper is trying to address.

Video-to-Audio Conversion
Coherence
Long Sequence Generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

LoVA Model
Diffusion Transformer
Long Video to Audio Conversion
Xin Cheng
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 100872, China
Xihua Wang
Renmin University of China
Yihan Wu
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 100872, China
Yuyue Wang
Gaoling School of Artificial Intelligence, Renmin University of China, Beijing 100872, China
Ruihua Song
Renmin University of China
AI-based creation, multi-modality chitchat, natural language understanding, information retrieval, information extraction