AI Summary
To address the reliance on manual annotations and the challenge of cross-modal temporal alignment in automatic video background music generation, this paper proposes TIVM, a self-supervised cross-modal audio-video matching framework. TIVM employs a dual-stream Transformer encoder to independently model the temporal structure of the audio and video modalities, and leverages InfoNCE contrastive learning to achieve fine-grained cross-modal alignment within a shared embedding space, without requiring any human-annotated supervision. Crucially, it integrates Transformers for joint audio-video representation learning, substantially enhancing temporal semantic consistency. Extensive experiments on multiple benchmark datasets demonstrate that TIVM outperforms state-of-the-art methods by 12.6% in Recall@10, validating its effectiveness and generalizability for unsupervised cross-modal matching.
Abstract
A fitting soundtrack can help a video better convey its content and provide a more immersive experience. This paper introduces a novel approach that uses self-supervised contrastive learning to automatically recommend audio for video content, eliminating the need for manual labeling. We use a dual-branch cross-modal embedding model that maps both audio and video features into a common low-dimensional space; the fit of an audio-video pair can then be modeled as an inverse distance measure. In addition, a comparative analysis of various temporal encoding methods is presented, highlighting the effectiveness of transformers at handling the temporal information in audio-video matching tasks. Through multiple experiments, we demonstrate that our model, TIVM, which integrates transformer encoders with an InfoNCE loss, significantly improves audio-video matching performance and surpasses traditional methods.
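To make the contrastive objective concrete, the sketch below shows a symmetric InfoNCE loss over a batch of matched (video, audio) embeddings in the shared space, where pair i is the positive for row i and all other pairs in the batch serve as negatives. This is a minimal pure-Python illustration, not the paper's implementation: the function names, the temperature value, and the use of cosine similarity as the (inverse-distance) fit measure are assumptions for the example; in TIVM the embeddings themselves would come from the transformer encoders.

```python
import math

def l2_normalize(v):
    # Project an embedding onto the unit sphere so that dot products
    # become cosine similarities in the shared space.
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def info_nce_loss(video_embs, audio_embs, temperature=0.07):
    # Symmetric InfoNCE over a batch of matched (video, audio) pairs.
    # Pair i is the positive for row i; other pairs act as negatives.
    # (Temperature 0.07 is an illustrative default, not TIVM's setting.)
    video_embs = [l2_normalize(v) for v in video_embs]
    audio_embs = [l2_normalize(a) for a in audio_embs]
    n = len(video_embs)
    # Cosine-similarity logits scaled by temperature.
    logits = [[sum(x * y for x, y in zip(v, a)) / temperature
               for a in audio_embs] for v in video_embs]

    def cross_entropy(row, target):
        # Numerically stable -log softmax(row)[target].
        m = max(row)
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        return log_z - row[target]

    # Average the video->audio and audio->video retrieval losses.
    v2a = sum(cross_entropy(logits[i], i) for i in range(n)) / n
    a2v = sum(cross_entropy([logits[j][i] for j in range(n)], i)
              for i in range(n)) / n
    return 0.5 * (v2a + a2v)
```

The loss is low when each video embedding sits closer to its own audio clip than to any other clip in the batch, which is exactly the inverse-distance matching criterion described above.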