SPUR: A Plug-and-Play Framework for Integrating Spatial Audio Understanding and Reasoning into Large Audio-Language Models

📅 2025-11-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large audio-language models (LALMs) accept only mono-channel input, limiting their ability to model spatial auditory cues such as azimuth, elevation, and distance, and thereby hindering understanding and reasoning in realistic acoustic scenes. To address this, the paper proposes SPUR, the first plug-and-play spatially aware framework, comprising a rotation-aware first-order ambisonics (FOA) encoder and a multimodal adapter tailored to spatial reasoning. It further introduces SPUR-Set, a spatial-relational question-answering dataset. The method combines FOA representations, multimodal feature alignment, and supervised spatial-QA fine-tuning, trained jointly on real recordings and controllable synthetic data. Experiments show significant improvements on spatial QA and multi-speaker attribution while preserving general audio-understanding capabilities, and ablation studies confirm the effectiveness and orthogonality of each component.

📝 Abstract
Spatial perception is central to auditory intelligence, enabling accurate understanding of real-world acoustic scenes and advancing human-level perception of the world around us. While recent large audio-language models (LALMs) show strong reasoning over complex audio, most operate on monaural inputs and lack the ability to capture spatial cues such as direction, elevation, and distance. We introduce SPUR, a lightweight, plug-in approach that equips LALMs with spatial perception through minimal architectural changes. SPUR consists of: (i) a First-Order Ambisonics (FOA) encoder that maps (W, X, Y, Z) channels to rotation-aware, listener-centric spatial features, integrated into target LALMs via a multimodal adapter; and (ii) SPUR-Set, a spatial QA dataset combining open-source FOA recordings with controlled simulations, emphasizing relative direction, elevation, distance, and overlap for supervised spatial reasoning. Fine-tuning our model on SPUR-Set consistently improves spatial QA and multi-speaker attribution while preserving general audio understanding. SPUR provides a simple recipe that transforms monaural LALMs into spatially aware models. Extensive ablations validate the effectiveness of our approach.
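The paper's FOA encoder is not described in detail here, but the reason (W, X, Y, Z) channels carry directional information can be illustrated with a standard signal-processing identity: the time-averaged product of the omnidirectional W channel with each dipole channel (X, Y, Z) approximates the acoustic intensity vector, which points toward the source. The sketch below is a hypothetical illustration of that principle (not the paper's encoder): it synthesizes FOA channels for a plane wave from a known azimuth and elevation, then recovers the direction from the intensity vector.

```python
import numpy as np

def encode_plane_wave(s, az, el):
    """Encode a mono signal s as ideal FOA (W, X, Y, Z) channels for a
    plane wave arriving from azimuth az and elevation el (radians)."""
    w = s
    x = s * np.cos(az) * np.cos(el)
    y = s * np.sin(az) * np.cos(el)
    z = s * np.sin(el)
    return w, x, y, z

def estimate_doa(w, x, y, z):
    """Estimate direction of arrival from the time-averaged intensity
    vector (mean of W times each dipole channel)."""
    ix, iy, iz = np.mean(w * x), np.mean(w * y), np.mean(w * z)
    az = np.arctan2(iy, ix)
    el = np.arctan2(iz, np.hypot(ix, iy))
    return az, el

rng = np.random.default_rng(0)
s = rng.standard_normal(16000)              # 1 s of noise at 16 kHz
az_true, el_true = np.deg2rad(60.0), np.deg2rad(20.0)
az_est, el_est = estimate_doa(*encode_plane_wave(s, az_true, el_true))
print(np.rad2deg(az_est), np.rad2deg(el_est))   # recovers ~60.0, ~20.0
```

A learned encoder like SPUR's would go well beyond this single-source, anechoic idealization, but the same channel correlations are the raw material it operates on.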
Problem

Research questions and friction points this paper is trying to address.

Existing audio-language models operate on monaural input and therefore lack spatial perception
Current models cannot capture direction, elevation, and distance cues
There is no lightweight method for integrating spatial understanding into existing LALMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-in spatial audio module for LALMs
FOA encoder with rotation-aware spatial features
Spatial QA dataset for supervised reasoning