Seeing Soundscapes: Audio-Visual Generation and Separation from Soundscapes Using Audio-Visual Separator

πŸ“… 2025-04-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses a limitation of existing audio-visual generation models, which support only single-class audio input and struggle with real-world polyphonic soundscapes. It introduces two tasks: generating an image from a multi-class soundscape, and an audio-visual separation task that produces a separate image for each class present in the mixed audio. To this end, the authors propose the Audio-Visual Generation and Separation model (AV-GAS), built around an audio-visual separator that combines contrastive learning with class-decoupled representation learning to generate scene-level images and to disentangle category-specific visual outputs from mixed audio. The model is trained and evaluated on the VGGSound dataset. Two new evaluation metrics are also proposed: the Class Representation Score (CRS) and a modified Recall@K (R@K). Experiments show a 7% improvement in CRS and a 4% gain in R@2*, with generated images exhibiting noticeably better category fidelity and visual plausibility than state-of-the-art methods.
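
The page does not give architectural details, so the following is only a minimal sketch, assuming a generic design: a separator that maps a mixed-audio embedding to per-class latents, which then condition an image generator for both a scene-level image and per-class images. `AudioVisualSeparator`, `ToyGenerator`, and all dimensions are hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AudioVisualSeparator(nn.Module):
    """Hypothetical separator: maps a mixed-audio embedding to K class-specific latents."""
    def __init__(self, audio_dim=512, latent_dim=512, num_classes=10):
        super().__init__()
        # One projection head per sound class, so each class gets its own latent code.
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(audio_dim, latent_dim), nn.ReLU(),
                           nn.Linear(latent_dim, latent_dim))
             for _ in range(num_classes)]
        )

    def forward(self, mixed_audio_emb):                   # (B, audio_dim)
        # Stack per-class latents: (B, num_classes, latent_dim)
        return torch.stack([h(mixed_audio_emb) for h in self.heads], dim=1)


class ToyGenerator(nn.Module):
    """Toy stand-in for a pretrained audio-conditioned image decoder."""
    def __init__(self, latent_dim=512, image_size=64):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 3 * image_size * image_size)
        self.image_size = image_size

    def forward(self, z):
        x = torch.tanh(self.fc(z))
        return x.view(-1, 3, self.image_size, self.image_size)


def generate_images(separator, generator, mixed_audio_emb, present_classes):
    """Generate one scene image from the pooled latent and one image per present class."""
    class_latents = separator(mixed_audio_emb)             # (B, K, D)
    scene_latent = class_latents.mean(dim=1)               # pooled latent for the full soundscape
    scene_image = generator(scene_latent)                  # (B, 3, H, W)
    per_class_images = {c: generator(class_latents[:, c]) for c in present_classes}
    return scene_image, per_class_images


if __name__ == "__main__":
    sep, gen = AudioVisualSeparator(), ToyGenerator()
    mixed = torch.randn(2, 512)                            # batch of mixed-audio embeddings
    scene, per_class = generate_images(sep, gen, mixed, present_classes=[1, 4])
    print(scene.shape, {c: img.shape for c, img in per_class.items()})
```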

πŸ“ Abstract
Recent audio-visual generative models have made substantial progress in generating images from audio. However, existing approaches focus on generating images from single-class audio and fail to generate images from mixed audio. To address this, we propose an Audio-Visual Generation and Separation model (AV-GAS) for generating images from soundscapes (mixed audio containing multiple classes). Our contribution is threefold: First, we propose a new challenge in the audio-visual generation task, which is to generate an image given a multi-class audio input, and we propose a method that solves this task using an audio-visual separator. Second, we introduce a new audio-visual separation task, which involves generating separate images for each class present in a mixed audio input. Lastly, we propose new evaluation metrics for the audio-visual generation task: Class Representation Score (CRS) and a modified R@K. Our model is trained and evaluated on the VGGSound dataset. We show that our method outperforms the state-of-the-art, achieving 7% higher CRS and 4% higher R@2* in generating plausible images with mixed audio.
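
The abstract names a modified R@K (reported as R@2*) but does not define the modification here. As a hedged illustration only, the sketch below computes a standard retrieval-based Recall@K over image embeddings; the function name, the embedding source, and the multi-label hit criterion are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def recall_at_k(generated_embs, gallery_embs, gallery_labels, target_labels, k=2):
    """Baseline Recall@K: for each generated image, check whether any of its target
    classes appears among the labels of its K nearest gallery images by cosine similarity."""
    gen = F.normalize(generated_embs, dim=-1)              # (N, D)
    gal = F.normalize(gallery_embs, dim=-1)                # (M, D)
    sims = gen @ gal.T                                     # (N, M) cosine similarities
    topk = sims.topk(k, dim=-1).indices                    # (N, k) nearest gallery indices
    hits = 0
    for i, idx in enumerate(topk):
        retrieved = {int(gallery_labels[j]) for j in idx}
        if retrieved & set(target_labels[i]):              # any target class retrieved?
            hits += 1
    return hits / len(generated_embs)

# Example: 4 generated images, gallery of 100 reference images over 10 classes.
gen_embs = torch.randn(4, 256)
gal_embs = torch.randn(100, 256)
gal_labels = torch.randint(0, 10, (100,))
targets = [[0, 3], [2], [5, 7], [1]]                       # classes present in each soundscape
print(recall_at_k(gen_embs, gal_embs, gal_labels, targets, k=2))
```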
Problem

Research questions and friction points this paper is trying to address.

Generating images from mixed-class audio inputs
Separating mixed audio into distinct visual components
Evaluating audio-visual generation with new metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates images from multi-class mixed audio
Uses audio-visual separator for soundscapes
Introduces new evaluation metrics: CRS and a modified R@K (see the CRS sketch below)
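
The CRS formula is not spelled out on this page. Assuming only the name (Class Representation Score), the sketch below gives one plausible, purely hypothetical reading: score a generated image by the fraction of the soundscape's classes that a pretrained image classifier detects in it; the paper's actual definition may differ.

```python
import torch

def class_representation_score(class_probs, present_classes, threshold=0.5):
    """Hypothetical CRS-style score: fraction of sound classes present in the input
    soundscape that an image classifier detects in the corresponding generated image."""
    scores = []
    for probs, present in zip(class_probs, present_classes):
        detected = {c for c in present if probs[c] >= threshold}
        scores.append(len(detected) / len(present))
    return sum(scores) / len(scores)

# Example: classifier outputs for 2 generated images over 10 classes.
probs = torch.sigmoid(torch.randn(2, 10))
present = [[0, 3], [2, 5, 7]]                              # classes audible in each soundscape
print(class_representation_score(probs, present))
```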
πŸ”Ž Similar Papers
No similar papers found.
Minjae Kang
Seoul National University
Reinforcement Learning · Robotic Manipulation
Martim Brandao
King’s College London, United Kingdom