AI Summary
This work addresses the limitation of existing audio-visual generation models, which only support single-category audio input and struggle with real-world polyphonic soundscapes. We introduce two novel tasks: *soundscape-driven joint audio-visual generation* and *audio-driven fine-grained audio-visual separation*. To this end, we propose the Audio-Visual Generation and Separation (AV-GAS) model, which integrates contrastive learning with class-decoupled representation learning to jointly generate scene-level images and disentangle category-specific visual outputs from mixed audio inputs. The model is trained end-to-end on the VGGSound dataset. We further propose two new evaluation metrics: the Class Representation Score (CRS) and a modified Recall@K (R@K) for intra-class retrieval. Experiments demonstrate a 7% improvement in CRS and a 4% gain in R@2*, with generated images exhibiting significantly enhanced category fidelity and visual plausibility over state-of-the-art methods.
Abstract
Recent audio-visual generative models have made substantial progress in generating images from audio. However, existing approaches focus on single-class audio and fail to generate images from mixed audio. To address this, we propose an Audio-Visual Generation and Separation model (AV-GAS) for generating images from soundscapes (mixed audio containing multiple classes). Our contribution is threefold: First, we introduce a new challenge in the audio-visual generation task, namely generating an image given a multi-class audio input, and we solve it using an audio-visual separator. Second, we introduce a new audio-visual separation task, which involves generating separate images for each class present in a mixed audio input. Lastly, we propose new evaluation metrics for the audio-visual generation task: the Class Representation Score (CRS) and a modified R@K. Our model is trained and evaluated on the VGGSound dataset. We show that our method outperforms the state of the art, achieving 7% higher CRS and 4% higher R@2* when generating plausible images from mixed audio.
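The exact definitions of CRS and the modified R@K are given in the paper itself; as a generic illustration of the retrieval-style evaluation behind an R@K metric, a standard Recall@K over cosine similarity between query and gallery embeddings can be sketched as follows (all function and variable names here are hypothetical, not from the paper):

```python
import numpy as np

def recall_at_k(query_embs, gallery_embs, query_labels, gallery_labels, k=2):
    """Generic Recall@K: the fraction of queries whose ground-truth class
    appears among the top-k most cosine-similar gallery items.
    Illustrative only; the paper's modified R@K may differ in detail."""
    # L2-normalize so the dot product equals cosine similarity.
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = q @ g.T                              # (num_queries, num_gallery)
    topk = np.argsort(-sims, axis=1)[:, :k]     # indices of k nearest items
    hits = [query_labels[i] in gallery_labels[topk[i]]
            for i in range(len(query_labels))]
    return float(np.mean(hits))
```

In an audio-to-image setting, the queries would be embeddings of generated images and the gallery would hold class-labeled reference embeddings, so a higher score indicates that generated images are retrievable by their intended sound class.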