🤖 AI Summary
Video-to-music generation faces two key challenges: incomplete video representation learning and insufficient beat synchronization accuracy. To address these, we propose a hierarchical video parsing framework coupled with a storyboard-guided cross-modal coordination mechanism. First, we design a novel frame-level transition-beat alignment module that dynamically synchronizes visual scene transitions with musical beats. Second, we construct a high-fidelity video-music paired dataset drawn from e-commerce advertisements and short videos. Third, building on a latent music diffusion model, we integrate modality-specific encoders, a storyboard-guided cross-attention mechanism (SG-CAtt), positional and duration-aware embeddings, and a transition-beat aligner and adapter (TB-As) to ensure both semantic coherence and temporal precision. Experiments on these curated datasets demonstrate substantial improvements in semantic relevance and beat synchronization accuracy, outperforming state-of-the-art methods on scene-matching and rhythm-alignment metrics.
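As a rough illustration of how the storyboard-guided cross-attention and the position/duration embeddings might fit together, the PyTorch sketch below conditions noisy music latents on per-shot storyboard features. The module name, dimensions, and additive conditioning scheme are our assumptions for illustration; the paper's released implementation may differ.

```python
# Minimal sketch of a storyboard-guided cross-attention block.
# All names, dimensions, and the additive position/duration conditioning
# are illustrative assumptions, not the paper's published code.
import torch
import torch.nn as nn

class StoryboardGuidedCrossAttention(nn.Module):
    """Cross-attention from music latents (queries) to storyboard
    features (keys/values), with position and duration embeddings."""

    def __init__(self, latent_dim: int, cond_dim: int, n_heads: int = 8,
                 max_shots: int = 64):
        super().__init__()
        self.pos_emb = nn.Embedding(max_shots, cond_dim)   # shot index
        self.dur_proj = nn.Linear(1, cond_dim)             # shot duration (s)
        self.attn = nn.MultiheadAttention(latent_dim, n_heads,
                                          kdim=cond_dim, vdim=cond_dim,
                                          batch_first=True)
        self.norm = nn.LayerNorm(latent_dim)

    def forward(self, music_latents, shot_feats, shot_durations):
        # music_latents:  (B, T, latent_dim) noisy music latents
        # shot_feats:     (B, S, cond_dim)   per-shot storyboard features
        # shot_durations: (B, S, 1)          per-shot durations in seconds
        idx = torch.arange(shot_feats.size(1), device=shot_feats.device)
        cond = shot_feats + self.pos_emb(idx) + self.dur_proj(shot_durations)
        out, _ = self.attn(music_latents, cond, cond)
        return self.norm(music_latents + out)  # residual connection

# Usage: 100 latent steps attending over 8 storyboard shots.
blk = StoryboardGuidedCrossAttention(latent_dim=256, cond_dim=512)
z = blk(torch.randn(2, 100, 256), torch.randn(2, 8, 512), torch.rand(2, 8, 1))
```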
📝 Abstract
Video-to-Music generation aims to produce musically appropriate background music that enhances a video's audiovisual immersion. However, current approaches suffer from two critical limitations: 1) incomplete representation of video details, leading to weak alignment, and 2) inadequate temporal and rhythmic correspondence, particularly in achieving precise beat synchronization. To address these challenges, we propose Video Echoed in Music (VeM), a latent music diffusion model that generates high-quality soundtracks with semantic, temporal, and rhythmic alignment to the input video. To capture video details comprehensively, VeM employs a hierarchical video parser that acts as a music conductor, orchestrating multi-level information across modalities. Modality-specific encoders, coupled with a storyboard-guided cross-attention mechanism (SG-CAtt), integrate semantic cues while maintaining temporal coherence through position and duration encoding. For rhythmic precision, a frame-level transition-beat aligner and adapter (TB-As) dynamically synchronize visual scene transitions with music beats. We further contribute a novel video-music paired dataset sourced from e-commerce advertisements and video-sharing platforms, which imposes stricter transition-beat synchronization requirements, along with new metrics tailored to the task. Experimental results demonstrate the superiority of VeM, particularly in semantic relevance and rhythmic precision.
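To make the transition-beat synchronization idea concrete, the sketch below snaps detected scene cuts to their nearest musical beats and scores how many fall within a tolerance window. The nearest-beat heuristic and the tolerance value are illustrative assumptions; VeM's TB-As components learn this alignment inside the diffusion model rather than applying a fixed rule.

```python
# Illustrative sketch of frame-level transition-beat alignment scoring.
# The snapping rule and 125 ms tolerance are assumptions for this example.
import numpy as np

def align_transitions_to_beats(transition_times: np.ndarray,
                               beat_times: np.ndarray,
                               tolerance: float = 0.125):
    """Match each visual transition to its nearest beat and report the
    fraction of transitions landing within the tolerance window."""
    # For each transition, the index of the nearest beat.
    nearest = np.abs(transition_times[:, None]
                     - beat_times[None, :]).argmin(axis=1)
    offsets = transition_times - beat_times[nearest]  # signed error (s)
    hit_rate = float(np.mean(np.abs(offsets) <= tolerance))
    return nearest, offsets, hit_rate

# Example: three scene cuts against a 120 BPM beat grid (0.5 s period).
cuts = np.array([0.52, 1.98, 3.30])
beats = np.arange(0.0, 4.0, 0.5)
idx, err, score = align_transitions_to_beats(cuts, beats)
print(idx, err.round(2), score)  # [1 4 7] [ 0.02 -0.02 -0.2 ] ~0.67
```

A metric of this shape (hit rate of transitions against the beat grid) is one natural way to quantify the rhythm-alignment improvements the paper reports, though the exact metric definitions are given in the paper itself.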