Video Object Segmentation-Aware Audio Generation

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal audio generation models lack object-level controllability in professional Foley applications: they fail to synthesize precise sound effects conditioned on specific objects in a video and often introduce irrelevant background noise or spatially misaligned sounds. To address this, we propose a novel task, segmentation-aware audio generation, and introduce SAGANet, the first model to jointly encode visual segmentation masks, video frame sequences, and textual descriptions for object-level sound source localization and conditional synthesis. To support this task, we construct Segmented Music Solos, the first publicly available dataset of solo instrumental performances with pixel-accurate segmentation annotations. Quantitative and qualitative evaluations demonstrate that our approach significantly outperforms state-of-the-art methods in both audio fidelity and vision–audio object alignment, establishing a new benchmark for controllable Foley generation.

📝 Abstract
Existing multimodal audio generation models often lack precise user control, which limits their applicability in professional Foley workflows. In particular, these models attend to the entire video and provide no precise way to prioritize a specific object within a scene, so they often generate unnecessary background sounds or focus on the wrong objects. To address this gap, we introduce the novel task of video object segmentation-aware audio generation, which explicitly conditions sound synthesis on object-level segmentation maps. We present SAGANet, a new multimodal generative model that enables controllable audio generation by leveraging visual segmentation masks along with video and textual cues. Our model provides users with fine-grained and visually localized control over audio generation. To support this task and further research on segmentation-aware Foley, we propose Segmented Music Solos, a benchmark dataset of musical instrument performance videos with segmentation information. Our method demonstrates substantial improvements over current state-of-the-art methods and sets a new standard for controllable, high-fidelity Foley synthesis. Code, samples, and Segmented Music Solos are available at https://saganet.notion.site
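
To make the conditioning idea concrete, below is a minimal sketch (in PyTorch) of how per-frame segmentation masks could be fused with video and text features to condition an audio generator. All module names, feature dimensions, and the fusion strategy here are illustrative assumptions for this summary, not SAGANet's actual architecture, which the page does not detail.

```python
# Minimal sketch of segmentation-aware audio conditioning.
# Module names and shapes are assumptions, not SAGANet's actual components.
import torch
import torch.nn as nn

class SegmentationAwareConditioner(nn.Module):
    """Fuses per-frame segmentation masks with video and text features
    into a single conditioning sequence for an audio generator."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # Hypothetical projections; a real system would use pretrained backbones.
        self.video_proj = nn.Linear(768, dim)   # e.g., video backbone features
        self.text_proj = nn.Linear(512, dim)    # e.g., text encoder features
        self.mask_encoder = nn.Sequential(      # downsample binary masks to tokens
            nn.Conv2d(1, dim // 4, kernel_size=4, stride=4),
            nn.GELU(),
            nn.Conv2d(dim // 4, dim, kernel_size=4, stride=4),
        )
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, video_feats, text_feats, masks):
        # video_feats: (B, T, 768), text_feats: (B, L, 512), masks: (B, T, 1, H, W)
        b, t = masks.shape[:2]
        mask_tokens = self.mask_encoder(masks.flatten(0, 1))          # (B*T, dim, h, w)
        mask_tokens = mask_tokens.flatten(2).mean(-1).view(b, t, -1)  # (B, T, dim)
        # Mask tokens tell the generator *which* object in the frame should sound.
        cond = torch.cat(
            [self.video_proj(video_feats) + mask_tokens, self.text_proj(text_feats)],
            dim=1,
        )
        return self.fuse(cond)  # conditioning sequence for an audio decoder

# Usage with dummy tensors:
cond_model = SegmentationAwareConditioner()
video = torch.randn(2, 16, 768)          # 16 frames of video features
text = torch.randn(2, 8, 512)            # 8 text tokens
masks = torch.rand(2, 16, 1, 64, 64)     # per-frame object masks
print(cond_model(video, text, masks).shape)  # torch.Size([2, 24, 512])
```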
Problem

Research questions and friction points this paper is trying to address.

Generating audio with precise object-level control in videos
Eliminating unnecessary background sounds in multimodal synthesis
Providing visually localized audio generation for Foley workflows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Object-level segmentation maps condition sound synthesis
SAGANet uses visual masks with video and text cues
Segmented Music Solos dataset enables segmentation-aware Foley (see the data-layout sketch after this list)
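
As a rough illustration of what a segmentation-annotated sample might contain, here is a hypothetical layout for one Segmented Music Solos item. The field names, shapes, and helper function are assumptions for illustration; this page does not specify the dataset's actual format.

```python
# Hypothetical layout for one Segmented Music Solos sample (illustrative only;
# the actual dataset format is not specified in this summary).
from dataclasses import dataclass
import numpy as np

@dataclass
class SegmentedSoloSample:
    frames: np.ndarray   # (T, H, W, 3) RGB video frames
    masks: np.ndarray    # (T, H, W) binary mask of the sounding instrument
    audio: np.ndarray    # (S,) mono waveform aligned to the frames
    caption: str         # textual description of the performance

def masked_region_ratio(sample: SegmentedSoloSample) -> float:
    """Fraction of pixels covered by the instrument mask, averaged over time."""
    return float(sample.masks.mean())

# Usage with a dummy sample:
sample = SegmentedSoloSample(
    frames=np.zeros((16, 224, 224, 3), dtype=np.uint8),
    masks=np.zeros((16, 224, 224), dtype=np.float32),
    audio=np.zeros(16000, dtype=np.float32),
    caption="a person playing the violin",
)
print(masked_region_ratio(sample))  # 0.0
```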
Ilpo Viertola
Tampere University, Tampere, Finland
Vladimir Iashin
University of Oxford, Oxford, UK
Esa Rahtu
Professor, Tampere University, Finland
Computer Vision · Image Understanding · Machine Learning