🤖 AI Summary
This work addresses multi-dimensional modeling of cross-modal semantic alignment and temporal synchronization between video and music, covering rhythmic synchronization, affective matching, thematic consistency, and cultural association. To this end, we introduce HarmonySet, a large-scale, high-quality dataset of 48,328 video–music pairs, together with a human-in-the-loop, multi-dimensional semantic alignment annotation framework and a comprehensive evaluation benchmark tailored to this task. Methodologically, the approach integrates audio-based rhythm detection, fine-grained affective computation, cross-modal alignment modeling, and a multi-dimensional evaluation metric suite. Experimental results show that the proposed framework substantially improves model capability across all four alignment dimensions. Overall, this work provides a new data foundation, a novel annotation paradigm, and a standardized evaluation protocol for joint video–music understanding.
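As a rough illustration of how per-dimension scores could be rolled up into a single report, here is a minimal Python sketch. The `AlignmentScores` class, the `overall` helper, and the uniform weighting are hypothetical conveniences for this summary, not the paper's actual metric suite.

```python
from dataclasses import dataclass, fields

# Hypothetical four-dimension alignment report; the paper's real metrics
# are not reproduced here.
@dataclass
class AlignmentScores:
    rhythm: float   # rhythmic synchronization, in [0, 1]
    emotion: float  # affective matching
    theme: float    # thematic consistency
    culture: float  # cultural association

def overall(scores: AlignmentScores, weights: dict[str, float] | None = None) -> float:
    """Weighted mean over the four dimensions (uniform by default)."""
    weights = weights or {f.name: 1.0 for f in fields(scores)}
    total = sum(weights.values())
    return sum(getattr(scores, name) * w for name, w in weights.items()) / total

if __name__ == "__main__":
    s = AlignmentScores(rhythm=0.82, emotion=0.74, theme=0.69, culture=0.55)
    print(f"overall alignment: {overall(s):.3f}")  # -> 0.700
```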
📝 Abstract
This paper introduces HarmonySet, a comprehensive dataset designed to advance video-music understanding. HarmonySet consists of 48,328 diverse video-music pairs, annotated with detailed information on rhythmic synchronization, emotional alignment, thematic coherence, and cultural relevance. We propose a multi-step human-machine collaborative framework for efficient annotation, combining human insights with machine-generated descriptions to identify key transitions and assess alignment across multiple dimensions. Additionally, we introduce a novel evaluation framework with tasks and metrics to assess the multi-dimensional alignment of video and music, including rhythm, emotion, theme, and cultural context. Our extensive experiments demonstrate that HarmonySet, along with the proposed evaluation framework, significantly improves the ability of multimodal models to capture and analyze the intricate relationships between video and music.
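To make the annotation structure concrete, the sketch below shows what a single annotated video-music pair might look like as a JSON record, with per-dimension descriptions and the key transitions identified during human-machine annotation. All field names (`pair_id`, `key_transitions_sec`, the `annotations` keys) are illustrative assumptions; the released dataset's actual schema may differ.

```python
import json

# Illustrative record for one video-music pair; field names are assumptions
# for this sketch, not HarmonySet's published schema.
record = {
    "pair_id": "hs_000001",
    "video_path": "videos/hs_000001.mp4",
    "music_path": "music/hs_000001.wav",
    "key_transitions_sec": [3.2, 11.8, 24.5],  # moments flagged during annotation
    "annotations": {
        "rhythm":  "Cuts land on the downbeats of the chorus.",
        "emotion": "Uplifting melody mirrors the celebratory footage.",
        "theme":   "Folk instrumentation matches the harvest-festival setting.",
        "culture": "Music and visuals both reference Lunar New Year traditions.",
    },
}

print(json.dumps(record, indent=2))
```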