🤖 AI Summary
This paper addresses the problem of synthesizing environmental audio from arbitrary viewpoints without requiring prior knowledge of sound sources or room geometry. The method leverages only raw audio recorded at sparsely distributed microphones together with corresponding panoramic RGB-D visual data. Its core contributions are threefold: (1) a novel visual-acoustic binding module that jointly models local visual features and acoustic propagation characteristics; (2) a microphone-placement optimization mechanism that adapts to any layout, coupled with a viewpoint-aware, multi-reference weighted fusion strategy (sketched below); and (3) panoramic RGB-D-driven visual embedding learning that implicitly encodes acoustic transfer functions. Evaluated on public benchmarks and real-world scenes, the approach significantly outperforms existing methods in spatial consistency and audio fidelity, and it generalizes well to unseen room layouts, microphone configurations, and acoustic environments, requiring no explicit room modeling or source priors.
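To make the fusion step concrete, here is a minimal PyTorch sketch of a viewpoint-conditioned weighting scheme over reference microphones. The module name, the embedding dimensionality, and the use of relative position as the conditioning signal are illustrative assumptions, not the paper's actual fusion network.

```python
# A minimal sketch of viewpoint-aware weighted fusion over reference mics.
# All names, dimensions, and the distance-based conditioning are assumptions
# for illustration; the paper's implementation may differ.
import torch
import torch.nn as nn

class ViewpointWeightedFusion(nn.Module):
    """Fuse per-reference audio estimates with weights conditioned on each
    reference's visual embedding and its pose relative to the target."""

    def __init__(self, embed_dim: int = 128, hidden: int = 64):
        super().__init__()
        # Scores one reference: [embedding | relative position] -> scalar logit.
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim + 3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, ref_embeds, ref_pos, target_pos, ref_audio):
        # ref_embeds: (R, embed_dim)  visual embeddings at reference mics
        # ref_pos:    (R, 3)          reference microphone positions
        # target_pos: (3,)            novel target viewpoint position
        # ref_audio:  (R, T)          per-reference synthesized waveforms
        rel = ref_pos - target_pos                                   # (R, 3)
        logits = self.scorer(torch.cat([ref_embeds, rel], dim=-1))   # (R, 1)
        weights = torch.softmax(logits.squeeze(-1), dim=0)           # (R,)
        return (weights.unsqueeze(-1) * ref_audio).sum(dim=0)        # (T,)

# Usage with dummy data: 4 references, 1 s of audio at 16 kHz.
fusion = ViewpointWeightedFusion()
out = fusion(torch.randn(4, 128), torch.randn(4, 3),
             torch.randn(3), torch.randn(4, 16000))
print(out.shape)  # torch.Size([16000])
```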
📝 Abstract
We introduce SoundVista, a method that generates the ambient sound of an arbitrary scene at novel viewpoints. Given pre-acquired recordings of the scene from sparsely distributed microphones, SoundVista can synthesize the sound of that scene from an unseen target viewpoint. The method learns the underlying acoustic transfer function that relates the signals acquired at the distributed microphones to the signal at the target viewpoint, using a limited number of known recordings. Unlike existing works, our method does not require constraints on, or prior knowledge of, sound source details. Moreover, it adapts efficiently to diverse room layouts, reference microphone configurations, and unseen environments. To enable this, we introduce a visual-acoustic binding module that learns visual embeddings linked to local acoustic properties from panoramic RGB and depth data (see the sketch below). We first leverage these embeddings to optimize the placement of reference microphones in any given scene. During synthesis, we extract multiple embeddings at the reference locations and use them to compute adaptive weights for each reference's contribution, conditioned on the target viewpoint. We benchmark the task on both publicly available datasets and real-world settings, demonstrating significant improvements over existing methods.
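As a companion to the abstract, the following is a minimal sketch of how a panoramic RGB-D encoder for the binding module could look. The convolutional backbone, channel counts, and embedding size are assumptions chosen for illustration, not the architecture used in the paper.

```python
# A minimal sketch of extracting a local visual embedding from panoramic
# RGB-D input, as the binding module is described in the abstract.
# The encoder layers and sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class PanoramicRGBDEncoder(nn.Module):
    """Map a 4-channel panorama (RGB + depth) to an embedding intended
    to capture the local acoustic properties of that viewpoint."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling over the panorama
        )
        self.head = nn.Linear(64, embed_dim)

    def forward(self, rgbd):
        # rgbd: (B, 4, H, W) panoramic RGB-D, depth as the 4th channel
        feats = self.backbone(rgbd).flatten(1)  # (B, 64)
        return self.head(feats)                 # (B, embed_dim)

# Usage: one 256x512 equirectangular panorama with a depth channel.
enc = PanoramicRGBDEncoder()
emb = enc(torch.randn(1, 4, 256, 512))
print(emb.shape)  # torch.Size([1, 128])
```

In this reading, the same encoder serves both stages described in the abstract: its embeddings score candidate reference-microphone placements, and at synthesis time they feed the viewpoint-conditioned fusion weights shown in the earlier sketch.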