🤖 AI Summary
This work addresses the challenging problem of localizing and classifying invisible 3D acoustic sources—such as gas leaks or mechanical failures—using weakly correlated multimodal data. We propose a novel cross-modal modeling framework that explicitly incorporates physical surface priors. Our method employs synchronized RGB-D video and co-planar four-channel microphone array audio captured from multiple viewpoints, formulating the task as a set prediction problem. Key innovations include: (1) generating initial acoustic source candidates via single-view audio processing, and (2) iteratively refining their 3D positions and semantic classes using multi-view RGB-D geometric constraints on scene surfaces. Crucially, we are the first to embed an explicit physical surface model into both cross-modal feature alignment and beamforming, substantially improving robustness against RGB-D measurement noise and environmental acoustic interference. Evaluated on a large-scale synthetic dataset, our approach achieves significant performance gains over state-of-the-art baselines in both localization accuracy and classification precision.
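The "physical surface prior" above constrains candidate sources to lie on scene surfaces recovered from RGB-D measurements. The sketch below illustrates one simple form of that idea: back-projecting a depth map through a pinhole camera model and snapping a candidate 3D location to the nearest surface point. The intrinsics, toy scene, and nearest-neighbour snapping rule are illustrative assumptions, not the paper's actual refinement module.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map (H, W) in metres to an (N, 3) camera-frame point cloud."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel row/column indices
    z = depth.ravel()
    valid = z > 0                       # keep only pixels with a depth reading
    x = (u.ravel() - cx) / fx * z       # pinhole model: X = (u - cx) / fx * Z
    y = (v.ravel() - cy) / fy * z
    return np.stack([x, y, z], axis=1)[valid]

def snap_to_surface(candidate, surface_pts):
    """Replace a candidate 3D location with the closest observed surface point."""
    d2 = np.sum((surface_pts - candidate) ** 2, axis=1)
    return surface_pts[np.argmin(d2)]

# Toy scene: a flat wall 2 m in front of the camera.
depth = np.full((120, 160), 2.0)
pts = backproject(depth, fx=100.0, fy=100.0, cx=80.0, cy=60.0)

# A candidate hypothesised slightly off the wall gets pulled onto it.
refined = snap_to_surface(np.array([0.1, 0.0, 1.7]), pts)
print(refined)
```

In the paper this refinement is learned and iterated across views; the hard nearest-neighbour snap here is only the simplest stand-in for "project the candidate onto the observed surface."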
📝 Abstract
Accurately localizing 3D sound sources and estimating their semantic labels -- where the sources may not be visible but are assumed to lie on the physical surfaces of objects in the scene -- has many real-world applications, including detecting gas leaks and machinery malfunctions. The weak audio-visual correlation in such a setting poses new challenges, calling for innovative methods that answer whether and how cross-modal information can be used to solve the task. To this end, we propose an acoustic-camera rig consisting of a pinhole RGB-D camera and a coplanar four-channel microphone array (Mic-Array). By using this rig to record audio-visual signals from multiple views, we can exploit cross-modal cues to estimate the sound sources' 3D locations. Specifically, our framework, SoundLoc3D, treats the task as a set prediction problem in which each element of the set corresponds to a potential sound source. Given the weak audio-visual correlation, the set representation is initially learned from single-view microphone array signals and then refined by actively incorporating physical surface cues revealed by multi-view RGB-D images. We demonstrate the efficiency and superiority of SoundLoc3D on a large-scale simulated dataset, and further show its robustness to RGB-D measurement inaccuracy and ambient noise interference.
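The Mic-Array front end relies on beamforming to propose initial source candidates. As a minimal, self-contained sketch of that step, the snippet below runs delay-and-sum (steered-response power) azimuth estimation for a coplanar four-channel array on a simulated narrowband source. The array geometry, source frequency, and noise level are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

C = 343.0      # speed of sound (m/s)
FS = 16_000    # sample rate (Hz)
F0 = 2_000.0   # narrowband source frequency (Hz)

# Coplanar square array, 5 cm side, centred at the origin (xy-plane).
MICS = 0.05 * np.array([[-0.5, -0.5], [0.5, -0.5], [0.5, 0.5], [-0.5, 0.5]])

def steering_delays(azimuth_rad):
    """Per-mic delays (s) for a far-field plane wave from the given azimuth."""
    u = np.array([np.cos(azimuth_rad), np.sin(azimuth_rad)])
    return -(MICS @ u) / C

def simulate(azimuth_rad, n=4096, noise_amp=0.05, seed=0):
    """Complex narrowband snapshots at each mic for one plane-wave source."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / FS
    tau = steering_delays(azimuth_rad)
    x = np.exp(2j * np.pi * F0 * (t[None, :] - tau[:, None]))
    noise = rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape)
    return x + noise_amp * noise

def srp_azimuth(x, grid_deg=np.arange(0, 360)):
    """Scan candidate azimuths; return the one maximising steered power."""
    powers = []
    for deg in grid_deg:
        tau = steering_delays(np.deg2rad(deg))
        # Undo each mic's hypothesised delay, sum coherently, average power.
        y = (x * np.exp(2j * np.pi * F0 * tau[:, None])).sum(axis=0)
        powers.append(np.mean(np.abs(y) ** 2))
    return int(grid_deg[int(np.argmax(powers))])

x = simulate(np.deg2rad(60.0))
est = srp_azimuth(x)
print(est)
```

With this small aperture the beam is broad, so the estimate is only approximate; SoundLoc3D sharpens such single-view candidates by fusing them with multi-view RGB-D surface cues rather than relying on acoustic resolution alone.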