Learning to Highlight Audio by Watching Movies

📅 2025-05-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the perceptual misalignment caused by incongruence between visual saliency and acoustic prominence in videos, this paper introduces the novel task of "visually-guided acoustic highlighting," which leverages cinematic visuals to guide audio enhancement for improved audiovisual consistency. Methodologically, the paper: (1) constructs the muddy mix dataset, the first real-world movie audio dataset for this task, built without manual annotation; (2) proposes a separation–adjustment–remixing paradigm for pseudo-label generation; and (3) designs a Transformer-based multimodal framework integrating audiovisual feature alignment and cross-modal attention. Experiments demonstrate that the approach significantly outperforms existing baselines in both quantitative metrics and subjective evaluation. Ablation studies systematically validate the impact of contextual guidance types and data difficulty on model performance. This work establishes a foundation for vision-conditioned audio enhancement and advances research on coherent multimodal perception in cinematic content.

📝 Abstract
Recent years have seen a significant increase in video content creation and consumption. Crafting engaging content requires the careful curation of both visual and audio elements. While visual cue curation, through techniques like optimal viewpoint selection or post-editing, has been central to media production, its natural counterpart, audio, has not undergone equivalent advancements. This often results in a disconnect between visual and acoustic saliency. To bridge this gap, we introduce a novel task: visually-guided acoustic highlighting, which aims to transform audio to deliver appropriate highlighting effects guided by the accompanying video, ultimately creating a more harmonious audio-visual experience. We propose a flexible, transformer-based multimodal framework to solve this task. To train our model, we also introduce a new dataset -- the muddy mix dataset, leveraging the meticulous audio and video crafting found in movies, which provides a form of free supervision. We develop a pseudo-data generation process to simulate poorly mixed audio, mimicking real-world scenarios through a three-step process -- separation, adjustment, and remixing. Our approach consistently outperforms several baselines in both quantitative and subjective evaluation. We also systematically study the impact of different types of contextual guidance and difficulty levels of the dataset. Our project page is here: https://wikichao.github.io/VisAH/.
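The three-step pseudo-data process the abstract describes (separation, adjustment, remixing) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it assumes the movie soundtrack has already been separated into per-source waveform stems (NumPy arrays), and the gain range used to degrade the mix is a hypothetical choice.

```python
import numpy as np

def make_muddy_mix(stems, rng=None, gain_db_range=(-20.0, 0.0)):
    """Simulate a poorly mixed "muddy" track from separated stems.

    stems: list of 1-D float arrays (e.g. speech, music, effects),
           assumed to be the output of a source-separation step.
    Returns (muddy_mix, target_mix): the degraded training input and
    the original, well-mixed audio used as the supervision target.
    """
    rng = np.random.default_rng(rng)
    target_mix = np.sum(stems, axis=0)          # original, well-mixed audio

    # Adjustment: perturb each stem's loudness with a random gain,
    # flattening the balance the sound designers crafted.
    muddy = np.zeros_like(stems[0])
    for stem in stems:
        gain_db = rng.uniform(*gain_db_range)   # hypothetical gain range
        muddy = muddy + stem * (10.0 ** (gain_db / 20.0))

    # Remixing: rescale so the degraded mix has a comparable peak level.
    peak = np.max(np.abs(muddy))
    if peak > 0:
        muddy = muddy / peak * np.max(np.abs(target_mix))
    return muddy, target_mix
```

Because the original soundtrack serves as the target, no manual labels are needed, which is the "free supervision" the abstract refers to.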
Problem

Research questions and friction points this paper is trying to address.

Bridging visual-acoustic saliency disconnect in media
Transforming audio for harmonious video-guided highlighting
Developing a multimodal framework for aligning audio with visual saliency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based multimodal framework for audio highlighting
Muddy mix dataset from movies for free supervision
Pseudo-data generation simulates real-world audio scenarios
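The core of the transformer-based framework listed above is cross-modal attention, in which audio features attend to video features so that visual saliency can steer the audio highlighting. A minimal single-head sketch in NumPy, with hypothetical dimensions and projections (not the paper's actual architecture):

```python
import numpy as np

def cross_modal_attention(audio_tokens, video_tokens, w_q, w_k, w_v):
    """Single-head cross-attention: audio queries attend to video keys/values.

    audio_tokens: (Ta, d) audio feature sequence (queries)
    video_tokens: (Tv, d) video feature sequence (keys/values)
    w_q, w_k, w_v: (d, d) learned projection matrices
    Returns (Ta, d): audio features re-weighted by visual context.
    """
    q = audio_tokens @ w_q
    k = video_tokens @ w_k
    v = video_tokens @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])       # (Ta, Tv) similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over video tokens
    return attn @ v
```

Each audio time step ends up as a convex combination of projected video features, which is what lets visually salient moments modulate how the corresponding audio is enhanced.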
🔎 Similar Papers
2024-07-18 · IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) · Citations: 0