🤖 AI Summary
This work addresses a critical limitation of current large audio-language models, which process only monaural input and cannot perceive spatial sound cues, a capability fundamental to comprehensive acoustic scene understanding. To bridge this gap, the study introduces spatial intelligence into large audio-language models for the first time, proposing a hierarchical Auditory Scene Analysis (ASA) framework. The framework leverages a synthesized spatial (First-Order Ambisonics) audio dataset, integrates parallel semantic and spatial encoders with a dense fusion mechanism, and employs a progressive training paradigm that combines supervised fine-tuning and curriculum learning with reinforcement learning via Group Relative Policy Optimization (GRPO). The approach substantially enhances the model's spatial auditory comprehension, advancing large audio models beyond monaural semantic recognition toward holistic, full-scene acoustic analysis.
📝 Abstract
Existing large audio-language models (LALMs) perceive the world as "mono": a single stream of audio that ignores the critical spatial dimension ("where") required for universal audio scene analysis (ASA). To bridge this gap, we first introduce a hierarchical framework for ASA. Guided by this framework, we present a system that enables LALMs to understand and reason about the complex acoustic world. Our system endows LALMs with universal spatial understanding through four key innovations: (1) a scalable simulation pipeline that synthesizes high-quality First-Order Ambisonics (FOA) data; (2) a unified model framework that integrates universal spatial encoding with a dense hybrid projection mechanism to bridge the modality gap; (3) a progressive training curriculum that evolves from representation alignment to reinforcement learning-based reasoning; and (4) a comprehensive ASA benchmark designed to rigorously evaluate atomic perception, relational integration, and cognitive reasoning, on which our model demonstrates strong spatial understanding. Our work provides a clear pathway for leveraging the powerful reasoning abilities of LALMs toward holistic ASA, advancing from "mono" semantic recognition to spatial intelligence.