VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling

📅 2024-06-06
🏛️ arXiv.org
📈 Citations: 16 (4 influential)
🤖 AI Summary
This work addresses the cross-modal generation problem of synthesizing high-fidelity music that is both acoustically and semantically aligned with video content. Methodologically, the authors construct a large-scale dataset of 360K video–music pairs spanning genres such as movie trailers, advertisements, and documentaries, and propose VidMuse, a simple end-to-end framework that generates music conditioned solely on video. Its core contribution is Long-Short-Term modeling, which jointly attends to short-term local visual details and the long-term narrative structure of the full video, so the generated track remains musically coherent while tracking the on-screen content. Extensive experiments show that VidMuse outperforms existing models in audio quality, diversity, and audio-visual alignment.
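To make the Long-Short-Term idea concrete, below is a minimal PyTorch sketch of how local (short-term) and global (long-term) visual cues might be fused into one conditioning sequence for a music decoder. All module names, dimensions, and the fusion scheme here are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of Long-Short-Term visual conditioning (illustrative only;
# module names, dimensions, and the fusion scheme are assumptions, not the
# paper's exact architecture).
import torch
import torch.nn as nn

class LongShortTermVisualModule(nn.Module):
    """Fuses a local window of frame features (short-term) with a pooled
    summary of the whole video (long-term)."""

    def __init__(self, dim: int = 768, n_heads: int = 8):
        super().__init__()
        # Short-term branch: self-attention over a local frame window.
        self.short_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Long-term branch: a learned query attends over all frames.
        self.global_query = nn.Parameter(torch.randn(1, 1, dim))
        self.long_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, local_frames: torch.Tensor, all_frames: torch.Tensor):
        # local_frames: (B, T_local, D) features near the current music segment
        # all_frames:   (B, T_video, D) features for the full video
        short, _ = self.short_attn(local_frames, local_frames, local_frames)
        q = self.global_query.expand(all_frames.size(0), -1, -1)
        long, _ = self.long_attn(q, all_frames, all_frames)   # (B, 1, D)
        long = long.expand(-1, short.size(1), -1)             # broadcast in time
        # Concatenate the two cues and project back to the model dimension;
        # the output would condition a music decoder (e.g., via cross-attention).
        return self.proj(torch.cat([short, long], dim=-1))    # (B, T_local, D)
```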

📝 Abstract
In this work, we systematically study music generation conditioned solely on the video. First, we present a large-scale dataset comprising 360K video-music pairs, including various genres such as movie trailers, advertisements, and documentaries. Furthermore, we propose VidMuse, a simple framework for generating music aligned with video inputs. VidMuse stands out by producing high-fidelity music that is both acoustically and semantically aligned with the video. By incorporating local and global visual cues, VidMuse enables the creation of musically coherent audio tracks that consistently match the video content through Long-Short-Term modeling. Through extensive experiments, VidMuse outperforms existing models in terms of audio quality, diversity, and audio-visual alignment. The code and datasets are available at https://vidmuse.github.io/.
Problem

Research questions and friction points this paper is trying to address.

Generating music solely conditioned on video input
Ensuring acoustic and semantic alignment between music and video
Maintaining musical coherence over the full duration of a video, not just frame-by-frame
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates music solely from video inputs
Uses Long-Short-Term modeling to keep generated music aligned with the video at both local and global time scales
Incorporates local and global visual cues (a simple way to score the resulting audio-visual alignment is sketched after this list)
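One way to quantify the audio-visual alignment these bullets refer to is embedding-space similarity between time-aligned video and music segments. The sketch below assumes generic pretrained video and audio encoders that map into a shared space (placeholders such as an ImageBind-style pair); it is not the paper's evaluation code.

```python
# Hedged sketch: scoring video-music alignment via cosine similarity of
# per-segment embeddings. The encoders that produce these embeddings are
# assumed placeholders, not the paper's evaluation pipeline.
import torch
import torch.nn.functional as F

def alignment_score(video_emb: torch.Tensor, audio_emb: torch.Tensor) -> torch.Tensor:
    """Mean cosine similarity between time-aligned embedding sequences.

    video_emb, audio_emb: (T, D) tensors, one row per synchronized segment
    of the video and its generated music.
    """
    video_emb = F.normalize(video_emb, dim=-1)
    audio_emb = F.normalize(audio_emb, dim=-1)
    return (video_emb * audio_emb).sum(dim=-1).mean()
```

A higher score indicates that the music tracks the video more closely segment by segment; averaging over segments rewards alignment across the whole clip rather than at a single moment.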