🤖 AI Summary
Existing large audio-language models (LALMs) accept only mono-channel input, limiting their ability to model spatial auditory cues such as direction, elevation, and distance, and thereby hindering understanding and reasoning in realistic acoustic scenes. To address this, we propose SPUR, a lightweight plug-in framework comprising a rotation-aware first-order ambisonics (FOA) encoder and a multimodal adapter that integrates listener-centric spatial features into a target LALM. We further introduce SPUR-Set, a spatial question-answering dataset that combines open-source FOA recordings with controlled simulations and emphasizes relative direction, elevation, distance, and overlap. Supervised fine-tuning on SPUR-Set yields consistent gains on spatial QA and multi-speaker attribution while preserving general audio understanding, and extensive ablations validate the effectiveness of each component.
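To make the supervised spatial QA setup concrete, here is a hypothetical sketch of what a single SPUR-Set sample might look like; the field names and values are illustrative assumptions, not the released schema:

```python
# Hypothetical SPUR-Set sample layout (field names are assumptions).
sample = {
    "audio": "recordings/scene_0042.foa.wav",  # 4-channel FOA (W, X, Y, Z)
    "question": "Is the second speaker to the left or right of the first?",
    "answer": "To the left, and slightly farther away.",
    "skills": ["relative_direction", "distance"],   # also: elevation, overlap
    "source": "simulated",  # real FOA recording or controlled simulation
}
```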
📝 Abstract
Spatial perception is central to auditory intelligence, enabling accurate understanding of real-world acoustic scenes and advancing human-level perception of the world around us. While recent large audio-language models (LALMs) show strong reasoning over complex audio, most operate on monaural inputs and lack the ability to capture spatial cues such as direction, elevation, and distance. We introduce SPUR, a lightweight, plug-in approach that equips LALMs with spatial perception through minimal architectural changes. SPUR consists of: (i) a First-Order Ambisonics (FOA) encoder that maps (W, X, Y, Z) channels to rotation-aware, listener-centric spatial features, integrated into target LALMs via a multimodal adapter; and (ii) SPUR-Set, a spatial QA dataset combining open-source FOA recordings with controlled simulations, emphasizing relative direction, elevation, distance, and overlap for supervised spatial reasoning. Fine-tuning on SPUR-Set consistently improves spatial QA and multi-speaker attribution while preserving general audio understanding. SPUR provides a simple recipe that transforms monaural LALMs into spatially aware models. Extensive ablations validate the effectiveness of our approach.
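As a rough architectural sketch, the PyTorch-style code below illustrates how a 4-channel FOA encoder and a small adapter could plug spatial features into an LALM's token embedding space. The spectrogram front end, layer shapes, and all hyperparameters are assumptions for illustration; the paper's actual encoder and adapter may differ.

```python
# Minimal sketch of a SPUR-style plug-in, assuming a PyTorch setup.
import torch
import torch.nn as nn

class FOAEncoder(nn.Module):
    """Encodes 4-channel (W, X, Y, Z) first-order ambisonics audio.

    W carries omnidirectional pressure; X, Y, Z carry the figure-of-eight
    components, so inter-channel relationships encode listener-centric
    direction, elevation, and distance cues.
    """
    def __init__(self, n_mels: int = 64, d_model: int = 512):
        super().__init__()
        # Treat the 4 FOA channels as input feature planes.
        self.conv = nn.Sequential(
            nn.Conv2d(4, 64, kernel_size=3, padding=1), nn.GELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.GELU(),
        )
        self.proj = nn.Linear(128 * (n_mels // 2), d_model)

    def forward(self, foa_spec: torch.Tensor) -> torch.Tensor:
        # foa_spec: (batch, 4, n_mels, time) spectrograms of W, X, Y, Z.
        h = self.conv(foa_spec)                        # (B, 128, n_mels/2, T')
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f) # one vector per frame
        return self.proj(h)                            # (B, T', d_model)

class SpatialAdapter(nn.Module):
    """Projects spatial audio features into the LALM's embedding space,
    where they can be prepended to text tokens like any other modality."""
    def __init__(self, d_model: int = 512, d_llm: int = 4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_llm), nn.GELU(), nn.Linear(d_llm, d_llm),
        )

    def forward(self, spatial_feats: torch.Tensor) -> torch.Tensor:
        return self.mlp(spatial_feats)  # (B, T', d_llm)
```

Under this reading, only the encoder and adapter are new; the backbone LALM is untouched, which is what makes the approach plug-in: the same recipe could, in principle, be attached to any monaural LALM that accepts soft prompt embeddings.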