SeeingSounds: Learning Audio-to-Visual Alignment via Text

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses audio-to-image generation without paired audio-visual data and without training visual generative models. The proposed method establishes a tri-modal audio–text–vision alignment framework: leveraging frozen pretrained language and vision-language models, it trains only lightweight adapters to enable semantic audio-to-text mapping and context-aware image-text alignment. Inspired by cognitive neuroscience, a dual alignment mechanism converts low-level audio features (e.g., loudness, pitch) into interpretable, procedurally generated text prompts, enabling fine-grained, controllable image synthesis. The approach achieves state-of-the-art performance on multiple standard benchmarks under both zero-shot and supervised settings, outperforming existing methods in controllable audio-to-image generation.

📝 Abstract
We introduce SeeingSounds, a lightweight and modular framework for audio-to-image generation that leverages the interplay between audio, language, and vision, without requiring any paired audio-visual data or training on visual generative models. Rather than treating audio as a substitute for text or relying solely on audio-to-text mappings, our method performs dual alignment: audio is projected into a semantic language space via a frozen language encoder, and contextually grounded in the visual domain using a vision-language model. This approach, inspired by cognitive neuroscience, reflects the natural cross-modal associations observed in human perception. The model operates on frozen diffusion backbones and trains only lightweight adapters, enabling efficient and scalable learning. Moreover, it supports fine-grained and interpretable control through procedural text prompt generation, where audio transformations (e.g., volume or pitch shifts) translate into descriptive prompts (e.g., "a distant thunder") that guide visual outputs. Extensive experiments across standard benchmarks confirm that SeeingSounds outperforms existing methods in both zero-shot and supervised settings, establishing a new state of the art in controllable audio-to-visual generation.
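The procedural prompt generation described above can be illustrated with a minimal sketch. This is not the authors' code: the feature names, thresholds, and modifier wording are all illustrative assumptions; the idea is only that measurable audio properties (loudness, pitch) are binned into descriptive text modifiers that condition the image generator.

```python
# Illustrative sketch of procedural prompt generation from low-level audio
# features. Thresholds and modifier phrases are hypothetical, not from the paper.

def feature_to_modifiers(loudness_db: float, pitch_hz: float) -> list[str]:
    """Map (hypothetical) loudness/pitch measurements to prompt modifiers."""
    mods = []
    if loudness_db < -30:          # quiet sounds read as far away
        mods.append("distant")
    elif loudness_db > -10:        # loud sounds read as close and intense
        mods.append("close-up, intense")
    if pitch_hz < 150:             # low pitch suggests a deep rumble
        mods.append("deep, rumbling")
    elif pitch_hz > 1000:          # high pitch suggests something sharp
        mods.append("sharp, high-pitched")
    return mods

def build_prompt(base_caption: str, loudness_db: float, pitch_hz: float) -> str:
    """Prepend feature-derived modifiers to a semantic caption."""
    mods = feature_to_modifiers(loudness_db, pitch_hz)
    return f"{', '.join(mods)} {base_caption}" if mods else base_caption

print(build_prompt("thunder", -35.0, 90.0))
# -> distant, deep, rumbling thunder
```

Lowering the volume of a thunder clip would thus shift the prompt toward "distant thunder", matching the controllability example in the abstract.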
Problem

Research questions and friction points this paper is trying to address.

Generating images from audio without paired training data
Aligning audio with visual content using language mediation
Enabling controllable audio-to-visual generation through text prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Audio projected into semantic language space
Uses frozen diffusion backbones with lightweight adapters
Generates descriptive prompts from audio transformations
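The adapter design in the list above can be sketched as follows. This is a toy illustration under assumptions not stated in the source (a two-layer MLP, 512-d audio and 768-d text embeddings): only the small projection is trainable, while the audio encoder and diffusion backbone it sits between stay frozen.

```python
import numpy as np

# Minimal sketch (not the paper's architecture) of a lightweight adapter that
# projects frozen audio-encoder embeddings into the text-embedding space
# consumed by a frozen diffusion backbone. All dimensions are assumptions.

rng = np.random.default_rng(0)

class AudioToTextAdapter:
    def __init__(self, audio_dim: int = 512, text_dim: int = 768, hidden: int = 256):
        # Only these weights would be trained; both encoders stay frozen.
        self.w1 = rng.standard_normal((audio_dim, hidden)) * 0.02
        self.w2 = rng.standard_normal((hidden, text_dim)) * 0.02

    def __call__(self, audio_emb: np.ndarray) -> np.ndarray:
        h = np.maximum(audio_emb @ self.w1, 0.0)  # ReLU hidden layer
        return h @ self.w2                        # embedding in "text space"

adapter = AudioToTextAdapter()
audio_emb = rng.standard_normal((1, 512))  # stand-in for a frozen encoder output
text_like = adapter(audio_emb)
print(text_like.shape)  # (1, 768)
```

Training only such a projection is what keeps the method lightweight: the parameter count is orders of magnitude smaller than the frozen diffusion backbone.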