Sat2Sound: A Unified Framework for Zero-Shot Soundscape Mapping

πŸ“… 2025-05-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing methods rely on paired satellite imagery and geotagged audio, which often fails to capture the diversity of sound sources at a given location. This work introduces Sat2Sound, a multimodal framework for zero-shot soundscape mapping: it uses a Vision-Language Model to generate semantically rich soundscape captions for locations depicted in satellite images, learns a shared codebook of soundscape concepts, and applies contrastive learning across audio, audio captions, satellite images, and satellite image captions, representing each sample as a weighted average of codebook concepts. The VLM-generated captions reduce reliance on scarce paired audio and support cross-modal mapping for locations without geotagged recordings. On the GeoSound and SoundingEarth benchmarks, Sat2Sound achieves state-of-the-art performance in cross-modal retrieval between satellite images and audio. Building on its ability to retrieve detailed soundscape captions, it also introduces location-based soundscape synthesis, enabling immersive acoustic experiences for arbitrary locations.

πŸ“ Abstract
We present Sat2Sound, a multimodal representation learning framework for soundscape mapping, designed to predict the distribution of sounds at any location on Earth. Existing methods for this task rely on satellite image and paired geotagged audio samples, which often fail to capture the diversity of sound sources at a given location. To address this limitation, we enhance existing datasets by leveraging a Vision-Language Model (VLM) to generate semantically rich soundscape descriptions for locations depicted in satellite images. Our approach incorporates contrastive learning across audio, audio captions, satellite images, and satellite image captions. We hypothesize that there is a fixed set of soundscape concepts shared across modalities. To this end, we learn a shared codebook of soundscape concepts and represent each sample as a weighted average of these concepts. Sat2Sound achieves state-of-the-art performance in cross-modal retrieval between satellite image and audio on two datasets: GeoSound and SoundingEarth. Additionally, building on Sat2Sound's ability to retrieve detailed soundscape captions, we introduce a novel application: location-based soundscape synthesis, which enables immersive acoustic experiences. Our code and models will be publicly available.
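The abstract's core modeling idea, a shared codebook of soundscape concepts with each sample expressed as a weighted average of those concepts, trained contrastively across modalities, can be sketched as below. This is a minimal illustration, not the paper's implementation: the softmax attention over codebook vectors and the toy InfoNCE loss are assumptions, and the actual weighting scheme, temperatures, and training objective may differ.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def codebook_encode(embedding, codebook, temperature=1.0):
    """Represent a modality embedding as a weighted average of shared concepts.

    Weights come from a softmax over embedding-codebook similarities, so
    audio, captions, and satellite images are all expressed in the same
    concept space.
    """
    sims = [dot(embedding, c) / temperature for c in codebook]
    weights = softmax(sims)
    dim = len(codebook[0])
    mixed = [sum(w * c[d] for w, c in zip(weights, codebook)) for d in range(dim)]
    return mixed, weights

def info_nce(anchors, positives, temperature=0.07):
    """Toy InfoNCE contrastive loss: anchor i's positive is positives[i],
    and every other entry serves as an in-batch negative."""
    loss = 0.0
    for i, a in enumerate(anchors):
        logits = [dot(a, p) / temperature for p in positives]
        probs = softmax(logits)
        loss += -math.log(probs[i])
    return loss / len(anchors)
```

In a full system the anchors and positives would be codebook-encoded embeddings from different modalities of the same location (e.g. a satellite image and its audio), pulling matching pairs together in the shared concept space.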
Problem

Research questions and friction points this paper is trying to address.

Predict the distribution of sounds at any location on Earth, beyond locations with geotagged audio
Overcome paired satellite-image/audio datasets that fail to capture the diversity of sound sources at a location
Learn soundscape concepts shared across audio, captions, and satellite imagery
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a Vision-Language Model to generate rich soundscape descriptions from satellite images
Learns a shared codebook of soundscape concepts, representing each sample as a weighted average of concepts
Enables location-based soundscape synthesis for immersive acoustic experiences
πŸ”Ž Similar Papers
No similar papers found.