Beamformed 360° Sound Maps: U-Net-Driven Acoustic Source Segmentation and Localization

📅 2025-08-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional 360° sound source localization regresses only discrete directions of arrival (DoAs), resulting in low spatial resolution and strong dependence on specific microphone array geometries. To address this, we propose a novel localization paradigm based on spherical semantic segmentation. Specifically, we formulate sound source localization as a pixel-wise binary segmentation task on beamformed audio maps, employing a U-Net architecture with frequency-domain input features. Spherical supervision masks are generated by synchronizing drone GPS coordinates with 360° video. To mitigate severe class imbalance, we adopt the Tversky loss and decode continuous DoAs via centroid-based post-processing. Crucially, our method is array-agnostic—requiring no prior knowledge of microphone geometry. Evaluated on real-world drone-recorded audio in open environments, it achieves significantly improved angular accuracy and cross-scene generalization, enabling robust, high-resolution distributed sound source localization.
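
The centroid-based post-processing mentioned above can be sketched as follows. This is a minimal sketch assuming a single active source per frame on an equiangular elevation × azimuth grid; the function name, grid layout, and angle conventions are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def decode_doa(mask):
    """Decode a continuous DoA from a binary spherical segmentation mask.

    mask: (n_el, n_az) binary array over an elevation x azimuth grid.
    Assumes a single active region (one drone per frame); its pixel
    centroid is mapped back to azimuth in [-180, 180) degrees and
    elevation in [-90, 90] degrees. Grid layout and angle conventions
    are illustrative assumptions.
    """
    n_el, n_az = mask.shape
    el_idx, az_idx = np.nonzero(mask)
    if el_idx.size == 0:
        return None                          # no active source detected
    el_c, az_c = el_idx.mean(), az_idx.mean()  # centroid of active pixels
    azimuth = az_c / n_az * 360.0 - 180.0
    elevation = 90.0 - el_c / n_el * 180.0
    return azimuth, elevation
```

Averaging pixel indices before converting to angles is what lets the decoded DoA be continuous even though the segmentation grid is discrete.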

📝 Abstract
We introduce a U-Net model for 360° acoustic source localization formulated as a spherical semantic segmentation task. Rather than regressing discrete direction-of-arrival (DoA) angles, our model segments beamformed audio maps (azimuth and elevation) into regions of active sound presence. Using delay-and-sum (DAS) beamforming on a custom 24-microphone array, we generate signals aligned with drone GPS telemetry to create binary supervision masks. A modified U-Net, trained on frequency-domain representations of these maps, learns to identify spatially distributed source regions while addressing class imbalance via the Tversky loss. Because the network operates on beamformed energy maps, the approach is inherently array-independent and can adapt to different microphone configurations without retraining from scratch. The segmentation outputs are post-processed by computing centroids over activated regions, enabling robust DoA estimates. Our dataset includes real-world open-field recordings of a DJI Air 3 drone, synchronized with 360° video and flight logs across multiple dates and locations. Experimental results show that the U-Net generalizes across environments, providing improved angular precision and offering a new paradigm for dense spatial audio understanding beyond traditional Sound Source Localization (SSL).
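
The delay-and-sum beamforming stage can be sketched in the frequency domain as follows. Far-field propagation and an equiangular steering grid are assumed; the function name, grid, and normalisation are illustrative, and the array geometry is arbitrary rather than the paper's 24-microphone layout.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def das_energy_map(signals, mic_pos, fs, az_grid, el_grid):
    """Frequency-domain delay-and-sum beamforming to a spherical energy map.

    signals: (n_mics, n_samples) time-domain recordings
    mic_pos: (n_mics, 3) microphone coordinates in metres
    az_grid, el_grid: steering angles in radians
    Returns an (n_el, n_az) beamformed energy map.
    """
    n_mics, n_samples = signals.shape
    spec = np.fft.rfft(signals, axis=1)           # per-mic spectra
    freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)  # bin frequencies, Hz

    energy = np.zeros((len(el_grid), len(az_grid)))
    for i, el in enumerate(el_grid):
        for j, az in enumerate(az_grid):
            # unit vector pointing toward the steering direction
            u = np.array([np.cos(el) * np.cos(az),
                          np.cos(el) * np.sin(az),
                          np.sin(el)])
            delays = mic_pos @ u / C              # arrival-time offsets, s
            # phase-align each mic, then sum coherently per frequency bin
            steer = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
            beam = (spec * steer).sum(axis=0)
            energy[i, j] = np.sum(np.abs(beam) ** 2)
    return energy
```

Signals arriving from the steered direction add coherently and dominate the map, so a sound source shows up as a bright region in the (elevation, azimuth) image that the U-Net then segments.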
Problem

Research questions and friction points this paper is trying to address.

Discrete DoA regression offers only low spatial resolution
Existing methods depend on specific microphone array geometries
Supervision masks are severely class-imbalanced, with few active pixels
Innovation

Methods, ideas, or system contributions that make the work stand out.

U-Net segments beamformed audio maps spatially
Delay-and-sum beamforming with 24-microphone array
Tversky loss addresses class imbalance effectively
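
The Tversky loss used to counter class imbalance can be sketched as follows. The alpha/beta weighting shown (penalising false negatives more heavily) is an illustrative choice, not necessarily the paper's setting.

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss for binary segmentation under heavy class imbalance.

    pred: predicted foreground probabilities in [0, 1]
    target: binary ground-truth mask
    alpha weights false positives, beta weights false negatives;
    beta > alpha penalises missed source pixels more, improving recall
    when active regions cover only a tiny fraction of the sphere.
    The alpha/beta values here are illustrative.
    """
    pred, target = pred.ravel(), target.ravel()
    tp = np.sum(pred * target)            # soft true positives
    fp = np.sum(pred * (1.0 - target))    # soft false positives
    fn = np.sum((1.0 - pred) * target)    # soft false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky
```

With alpha = beta = 0.5 this reduces to the Dice loss; the asymmetric weighting is what lets the loss trade precision for recall on sparse masks.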
Belman Jahir Rodriguez
International Centre for Neuromorphic Systems (ICNS), Western Sydney University, Australia
Sergio F. Chevtchenko
International Centre for Neuromorphic Systems (ICNS), Western Sydney University, Australia
Marcelo Herrera Martinez
Universidad de San Buenaventura, Colombia
Yeshwant Bethy
International Centre for Neuromorphic Systems (ICNS), Western Sydney University, Australia
Saeed Afshar
University of Western Sydney
Neuromorphic Engineering · Machine Learning