🤖 AI Summary
Music-to-MV description generation suffers from a significant cross-modal semantic gap. To address this, we systematically construct the first music-driven MV description dataset by integrating multi-source data (e.g., Music4All). We propose an end-to-end, interpretable cross-modal alignment training paradigm: jointly extracting rhythm-, emotion-, and timbre-related acoustic features via OpenSMILE and CLAP; fine-tuning multimodal vision-language models (e.g., Flamingo, Qwen-VL); and introducing a cross-modal attention mechanism to enforce audio–text temporal alignment. Experiments on our curated test set demonstrate a 23.6% improvement in BLEU-4 score over baselines and achieve a human-evaluated relevance score of 4.2/5.0. This work is the first to empirically validate that structured musical features—such as beat patterns, harmonic progressions, and affective contours—can effectively drive the generation of semantically rich and temporally coherent MV descriptions.
📝 Abstract
Music-to-music-video generation is a challenging task due to the intrinsic differences between the music and video modalities. The advent of powerful text-to-video diffusion models has opened a promising pathway for music-video (MV) generation by first addressing the music-to-MV description task and subsequently leveraging these models for video generation. In this study, we focus on the MV description generation task and propose a comprehensive pipeline encompassing training data construction and multimodal model fine-tuning. We fine-tune existing pre-trained multimodal models on our newly constructed music-to-MV description dataset based on the Music4All dataset, which integrates both musical and visual information. Our experimental results demonstrate that music representations can be effectively mapped to textual domains, enabling the generation of meaningful MV description directly from music inputs. We also identify key components in the dataset construction pipeline that critically impact the quality of MV description and highlight specific musical attributes that warrant greater focus for improved MV description generation.