🤖 AI Summary
This work addresses multi-source motion understanding and reasoning in dynamic spatial audio: jointly detecting overlapping sound events, estimating their Direction of Arrival (DoA) and distance, and answering complex semantic queries about source motion. To this end, we propose an end-to-end framework that integrates spatial audio encoding with semantic alignment: a spatial audio encoder extracts frame-level spatial features; a cross-attention mechanism grounds audio representations in semantic audio-class text embeddings, enabling generalization to unseen events; and a large language model, conditioned on the extracted spatial attributes, performs structured multimodal reasoning. We further introduce a benchmark for moving-source spatial audio understanding featuring multi-event scenes, diverse motion trajectories, and logic-based question answering. On this benchmark, our method delivers accurate multi-event detection, high-precision localization (mean DoA error below 5.2°, distance error below 0.32 m), and robust semantic reasoning in dynamic scenarios, outperforming the baseline across all tasks.
📝 Abstract
Spatial audio reasoning enables machines to interpret auditory scenes by understanding events and their spatial attributes. In this work, we focus on spatial audio understanding with an emphasis on reasoning about moving sources. First, we introduce a spatial audio encoder that processes spatial audio to detect multiple overlapping events and estimate their spatial attributes, Direction of Arrival (DoA) and source distance, at the frame level. To generalize to unseen events, we incorporate an audio grounding model that aligns audio features with semantic audio-class text embeddings via a cross-attention mechanism. Second, to answer complex queries about dynamic audio scenes involving moving sources, we condition a large language model (LLM) on the structured spatial attributes extracted by our model. Finally, we introduce a spatial audio motion understanding and reasoning benchmark dataset and demonstrate our framework's performance against a baseline model.
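To make the alignment step concrete, below is a minimal sketch of how frame-level audio features might be grounded in class text embeddings via single-head dot-product cross-attention. All shapes, names, and the choice of plain dot-product attention are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(audio_frames, class_text_emb):
    """Ground audio frames in semantic class embeddings.

    audio_frames:   (T, d) frame-level audio features (queries)   -- assumed shapes
    class_text_emb: (C, d) text embeddings of audio classes (keys/values)
    Returns semantically grounded frame features and per-frame
    audio-to-class attention weights.
    """
    d = audio_frames.shape[-1]
    scores = audio_frames @ class_text_emb.T / np.sqrt(d)  # (T, C) similarity
    attn = softmax(scores, axis=-1)                        # per-frame class alignment
    grounded = attn @ class_text_emb                       # (T, d) grounded features
    return grounded, attn

# Toy example with random features: 10 frames, 5 classes, 64-dim embeddings.
rng = np.random.default_rng(0)
audio = rng.standard_normal((10, 64))
text = rng.standard_normal((5, 64))
grounded, attn = cross_attend(audio, text)
```

Because the attention weights form a distribution over classes at every frame, they can double as soft frame-level event evidence, while the grounded features live in the shared audio-text space that supports unseen event labels.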