🤖 AI Summary
Existing video-to-audio (V2A) generation methods rely on frame-level visual features, which limits their ability to model temporal semantics and achieve precise cross-modal alignment, resulting in semantic degradation and audio-video misalignment. To address this, we propose a language-model-driven multimodal diffusion framework: (1) a large language model (LLM) parses video semantics and generates structured textual prompts; (2) a text-modulated cross-modal feature fusion module enables fine-grained alignment of video, audio, and text representations in the latent space; and (3) an adaptive temporal modeling mechanism enhances the temporal coherence of the generated audio. Evaluated on multiple benchmarks, our method achieves significant improvements in audio fidelity (+12.6% MOS) and semantic consistency (+28.3% CLAP Score), accelerates inference by 37%, and supports text-controllable and personalized audiovisual generation.
📝 Abstract
As artificial intelligence-generated content (AIGC) continues to evolve, video-to-audio (V2A) generation has emerged as a key area with promising applications in multimedia editing, augmented reality, and automated content creation. While Transformer and Diffusion models have advanced audio generation, a significant challenge persists in extracting precise semantic information from videos: current models often lose sequential context by relying solely on frame-based features. To address this, we present TA-V2A, a method that integrates language, audio, and video features to improve semantic representation in latent space. By incorporating large language models for enhanced video comprehension, our approach leverages text guidance to enrich semantic expression. Built on a diffusion model, our system uses automated text modulation to improve inference quality and efficiency, and provides personalized control through a text-guided interface. This integration enhances semantic expression while ensuring temporal alignment, leading to more accurate and coherent video-to-audio generation.
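The abstract does not specify how the text-modulated cross-modal fusion is implemented. As a rough illustration only, the sketch below shows one common way such fusion is realized in the latent space: frame-level video latents attend to LLM-generated text-prompt embeddings via scaled dot-product cross-attention, with a residual connection. All shapes, names, and the attention formulation are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context, d):
    # Scaled dot-product attention: each query vector attends to all
    # context tokens and returns their attention-weighted average.
    scores = query @ context.T / np.sqrt(d)   # (T, M) similarity matrix
    return softmax(scores, axis=-1) @ context  # (T, d) fused features

# Hypothetical latent shapes: T video frames, M text tokens, shared dim d.
T, M, d = 16, 8, 32
video_latents = rng.standard_normal((T, d))  # frame-level visual features
text_latents = rng.standard_normal((M, d))   # LLM-prompt embeddings

# Text-modulated fusion (assumed form): each frame latent is enriched with
# text semantics via cross-attention, plus a residual keeping visual detail.
fused = video_latents + cross_attention(video_latents, text_latents, d)
print(fused.shape)
```

In a real system the latents would come from learned encoders (e.g. a video backbone and a text encoder) with trained projection matrices for queries, keys, and values; the fused representation would then condition the diffusion model's denoising steps.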