🤖 AI Summary
Traditional 360° sound source localization regresses only discrete directions of arrival (DoAs), resulting in low spatial resolution and strong dependence on specific microphone array geometries. To address this, we propose a novel localization paradigm based on spherical semantic segmentation. Specifically, we formulate sound source localization as a pixel-wise binary segmentation task on beamformed audio maps, employing a U-Net architecture with frequency-domain input features. Spherical supervision masks are generated by synchronizing drone GPS coordinates with 360° video. To mitigate severe class imbalance, we adopt the Tversky loss and decode continuous DoAs via centroid-based post-processing. Crucially, our method is array-agnostic—requiring no prior knowledge of microphone geometry. Evaluated on real-world drone-recorded audio in open environments, it achieves significantly improved angular accuracy and cross-scene generalization, enabling robust, high-resolution distributed sound source localization.
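The summary mentions using the Tversky loss to cope with the severe class imbalance of sparse source pixels on a spherical map. A minimal NumPy sketch of that loss is below; the `alpha`/`beta` weights are illustrative defaults, not values taken from the paper:

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-7):
    """Tversky loss (1 - Tversky index) for binary segmentation.

    alpha weights false positives and beta weights false negatives;
    tilting alpha/beta lets training tolerate the rarity of positive
    (active-source) pixels better than plain Dice loss.
    NOTE: alpha=0.7, beta=0.3 are assumed defaults for illustration.
    """
    pred = pred.ravel().astype(float)      # predicted probabilities in [0, 1]
    target = target.ravel().astype(float)  # binary ground-truth mask
    tp = np.sum(pred * target)             # soft true positives
    fp = np.sum(pred * (1.0 - target))     # soft false positives
    fn = np.sum((1.0 - pred) * target)     # soft false negatives
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```

With `alpha == beta == 0.5` this reduces to the Dice loss; raising `beta` instead would penalize missed source pixels more heavily.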
📝 Abstract
We introduce a U-Net model for 360° acoustic source localization formulated as a spherical semantic segmentation task. Rather than regressing discrete direction-of-arrival (DoA) angles, our model segments beamformed audio maps (azimuth and elevation) into regions of active sound presence. Using delay-and-sum (DAS) beamforming on a custom 24-microphone array, we generate signals aligned with drone GPS telemetry to create binary supervision masks. A modified U-Net, trained on frequency-domain representations of these maps, learns to identify spatially distributed source regions while addressing class imbalance via the Tversky loss. Because the network operates on beamformed energy maps, the approach is inherently array-independent and can adapt to different microphone configurations without retraining from scratch. The segmentation outputs are post-processed by computing centroids over activated regions, yielding robust DoA estimates. Our dataset includes real-world open-field recordings of a DJI Air 3 drone, synchronized with 360° video and flight logs across multiple dates and locations. Experimental results show that the U-Net generalizes across environments and improves angular precision, offering a new paradigm for dense spatial audio understanding beyond traditional sound source localization (SSL).
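The centroid-based post-processing step can be sketched as follows: label connected active regions in the segmentation mask, then average each region's grid coordinates. This is a hedged illustration, not the paper's exact implementation; the grid resolutions and the `decode_doas` helper are assumptions. Azimuth is averaged circularly so regions straddling the 0°/360° seam decode correctly:

```python
import numpy as np
from scipy import ndimage

def decode_doas(mask, az_deg, el_deg):
    """Decode continuous DoAs from a binary segmentation mask.

    mask:   (n_el, n_az) binary array from the segmentation network
    az_deg: (n_az,) azimuth of each column in degrees, in [0, 360)
    el_deg: (n_el,) elevation of each row in degrees

    Returns a list of (azimuth, elevation) centroids, one per
    connected active region.
    """
    labels, n_regions = ndimage.label(mask)  # 4-connected components
    doas = []
    for k in range(1, n_regions + 1):
        rows, cols = np.nonzero(labels == k)
        # Circular mean over azimuth to handle the 0/360 wrap-around.
        az_rad = np.deg2rad(az_deg[cols])
        az = np.rad2deg(np.arctan2(np.sin(az_rad).mean(),
                                   np.cos(az_rad).mean())) % 360.0
        doas.append((az, el_deg[rows].mean()))
    return doas
```

Each returned pair is a continuous (azimuth, elevation) estimate, so the angular resolution is no longer limited to the discrete grid of the beamformed map.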