🤖 AI Summary
This paper addresses the problem of audio-to-image generation without requiring paired audio-visual data or pretraining visual generative models. The proposed method establishes a tri-modal audio–text–vision alignment framework: leveraging frozen pretrained language and vision-language models, it trains only lightweight adapters to enable semantic audio-to-text mapping and context-aware image-text alignment. Inspired by cognitive neuroscience, a dual-path alignment mechanism is introduced, converting low-level audio features (e.g., loudness, pitch) into interpretable, programmable text prompts—enabling fine-grained, controllable image synthesis. The approach achieves state-of-the-art performance on multiple standard benchmarks under both zero-shot and supervised settings, significantly outperforming existing methods in controllable audio-to-image generation.
📝 Abstract
We introduce SeeingSounds, a lightweight and modular framework for audio-to-image generation that leverages the interplay between audio, language, and vision, without requiring any paired audio-visual data or training of visual generative models. Rather than treating audio as a substitute for text or relying solely on audio-to-text mappings, our method performs dual alignment: audio is projected into a semantic language space via a frozen language encoder and contextually grounded into the visual domain using a vision-language model. This approach, inspired by cognitive neuroscience, reflects the natural cross-modal associations observed in human perception. The model operates on frozen diffusion backbones and trains only lightweight adapters, enabling efficient and scalable learning. Moreover, it supports fine-grained and interpretable control through procedural text prompt generation, where audio transformations (e.g., volume or pitch shifts) translate into descriptive prompts (e.g., "a distant thunder") that guide visual outputs. Extensive experiments across standard benchmarks confirm that SeeingSounds outperforms existing methods in both zero-shot and supervised settings, establishing a new state of the art in controllable audio-to-visual generation.
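The procedural prompt generation described above can be illustrated with a minimal sketch: low-level audio statistics (here, RMS loudness and a pitch shift in semitones) are mapped to descriptive modifiers that prefix a caption fed to a frozen text-to-image model. The function name, thresholds, and wording below are illustrative assumptions, not the paper's actual implementation.

```python
def describe_audio(base_caption: str, rms: float, pitch_shift_semitones: float) -> str:
    """Map simple audio measurements to a descriptive text prompt (hypothetical sketch)."""
    modifiers = []
    # Quieter audio suggests a more distant source; louder audio a closer one.
    if rms < 0.1:
        modifiers.append("distant")
    elif rms > 0.6:
        modifiers.append("close and loud")
    # Downward pitch shifts suggest a deeper source; upward shifts a higher one.
    if pitch_shift_semitones <= -3:
        modifiers.append("deep, rumbling")
    elif pitch_shift_semitones >= 3:
        modifiers.append("high-pitched")
    prefix = ", ".join(modifiers)
    return f"a {prefix} {base_caption}" if prefix else f"a {base_caption}"

print(describe_audio("thunder", rms=0.05, pitch_shift_semitones=-4))
# -> "a distant, deep, rumbling thunder"
```

The resulting prompt can then condition any off-the-shelf text-to-image diffusion model, which is what makes the control both interpretable and programmable: editing the audio (e.g., lowering its volume) deterministically edits the prompt.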