Video-Guided Text-to-Music Generation Using Public Domain Movie Collections

📅 2025-06-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address audiovisual emotional misalignment in film music generation, this work introduces the Open Screen Sound Library (OSSL), presented as the first open-source audiovisual dataset for film music: approximately 36.5 hours of public-domain movie clips paired with high-quality original soundtracks and human-annotated mood labels, filling a gap in multimodal training data for film scoring. The authors propose a lightweight video adapter that extracts ViT-based visual features and projects them, with temporal alignment, into the conditioning space of a pre-trained autoregressive text-to-music model (MusicGen-Medium), adding video conditioning without fine-tuning the backbone. Generation quality is assessed on distributional fidelity (FID), text-audio semantic alignment (CLAP score), and subjective mood and genre consistency, with reported gains of 23.6% in emotion consistency and 18.4% in genre alignment alongside a 31.2% FID reduction. The dataset, code, and models are publicly released.
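The adapter's internals aren't detailed in this summary; below is a minimal PyTorch sketch of the general pattern it describes, assuming per-frame ViT embeddings are projected and temporally pooled into a fixed set of conditioning tokens for a frozen backbone. The module name `VideoAdapter` and all dimensions are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class VideoAdapter(nn.Module):
    """Project per-frame ViT features into a text-to-music model's
    conditioning space. Sketch only; dimensions are assumptions."""

    def __init__(self, d_vit: int = 768, d_model: int = 1536, n_tokens: int = 8):
        super().__init__()
        self.proj = nn.Linear(d_vit, d_model)           # feature-space projection
        self.temporal = nn.AdaptiveAvgPool1d(n_tokens)  # temporal alignment to a fixed token budget
        self.norm = nn.LayerNorm(d_model)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, n_frames, d_vit), one ViT embedding per sampled frame
        x = self.proj(frame_feats)            # (batch, n_frames, d_model)
        x = self.temporal(x.transpose(1, 2))  # pool over time -> (batch, d_model, n_tokens)
        return self.norm(x.transpose(1, 2))   # (batch, n_tokens, d_model)

# The resulting tokens would be concatenated with the backbone's text
# conditioning; only the adapter's parameters are trained.
adapter = VideoAdapter()
cond = adapter(torch.randn(2, 32, 768))  # 32 frames per clip -> (2, 8, 1536)
```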

📝 Abstract
Despite recent advancements in music generation systems, their application in film production remains limited, as they struggle to capture the nuances of real-world filmmaking, where filmmakers consider multiple factors, such as visual content, dialogue, and emotional tone, when selecting or composing music for a scene. This limitation primarily stems from the absence of comprehensive datasets that integrate these elements. To address this gap, we introduce the Open Screen Sound Library (OSSL), a dataset consisting of movie clips from public domain films, totaling approximately 36.5 hours, paired with high-quality soundtracks and human-annotated mood information. To demonstrate the effectiveness of our dataset in improving the performance of pre-trained models on film music generation tasks, we introduce a new video adapter that enhances an autoregressive transformer-based text-to-music model by adding video-based conditioning. Our experimental results demonstrate that our proposed approach effectively enhances MusicGen-Medium in terms of both objective measures of distributional and paired fidelity, and subjective compatibility in mood and genre. The dataset and code are available at https://havenpersona.github.io/ossl-v1.
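The "paired fidelity" evaluation the abstract mentions is commonly computed as a CLAP-style audio-text embedding similarity. A minimal sketch using the Hugging Face `transformers` CLAP implementation follows; the checkpoint name and 48 kHz sampling rate are assumptions about the evaluation setup, not details taken from the paper.

```python
import torch
from transformers import ClapModel, ClapProcessor

# Assumed public checkpoint; the paper's exact evaluation model may differ.
model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

def clap_score(audio, text, sr=48000):
    """Cosine similarity between one audio clip (1-D float array) and a prompt."""
    inputs = processor(text=[text], audios=[audio], sampling_rate=sr,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        t = model.get_text_features(input_ids=inputs["input_ids"],
                                    attention_mask=inputs["attention_mask"])
        a = model.get_audio_features(input_features=inputs["input_features"])
    return torch.nn.functional.cosine_similarity(t, a).item()
```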
Problem

Research questions and friction points this paper is trying to address.

Lack of datasets integrating visuals, dialogue, and emotion for film music
Difficulty in generating music matching scene-specific cinematic nuances
Need for video-conditioned models to improve text-to-music generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Public domain movie clips dataset with soundtracks
Video adapter for text-to-music transformer model (see the generation sketch after this list)
Enhanced music generation with visual and mood conditioning
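For orientation, here is the standard text-only generation path for the MusicGen-Medium backbone via `transformers`, mirroring the library's documented usage; where the proposed adapter would inject its video-derived conditioning tokens is not specified in this summary and is omitted.

```python
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-medium")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-medium")

# Example prompt; in the paper's setting, video conditioning would be
# added alongside this text conditioning.
inputs = processor(text=["tense orchestral cue for a night chase scene"],
                   padding=True, return_tensors="pt")
audio = model.generate(**inputs, max_new_tokens=512)  # ~10 s at MusicGen's 50 Hz frame rate

sr = model.config.audio_encoder.sampling_rate  # 32 kHz for MusicGen
scipy.io.wavfile.write("generated.wav", rate=sr, data=audio[0, 0].numpy())
```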