🤖 AI Summary
This work proposes PhaseCoder, a pure Transformer-based audio encoder that addresses the limitations of existing audio models, which either neglect spatial information or rely on fixed microphone array geometries, hindering deployment across diverse hardware configurations. PhaseCoder jointly processes raw multi-channel audio signals and their corresponding 3D microphone coordinates to produce geometry-agnostic spatial audio embeddings. This approach enables microphone-independent spatial audio understanding and allows spatial audio tokens to be integrated into multimodal large language models such as Gemma 3n, supporting complex spatial reasoning and directional speech transcription. Experimental results demonstrate that PhaseCoder achieves state-of-the-art performance on microphone-agnostic sound source localization benchmarks and, for the first time, enables a large language model to perform spatial audio understanding tasks from arbitrary microphone arrays.
📝 Abstract
Current multimodal LLMs process audio as a mono stream, ignoring the rich spatial information essential for embodied AI. Existing spatial audio models, conversely, are constrained to fixed microphone geometries, preventing deployment across diverse devices. We present PhaseCoder, a transformer-only spatial audio encoder that is agnostic to microphone geometry. PhaseCoder takes raw multichannel audio and microphone coordinates as inputs to perform localization and produces robust spatial embeddings. We demonstrate that the Gemma 3n LLM can be fine-tuned to reason over "Spatial Audio Tokens" produced by PhaseCoder. We show our encoder achieves state-of-the-art results on microphone-invariant localization benchmarks and, for the first time, enables an LLM to perform complex spatial reasoning and targeted transcription tasks from an arbitrary microphone array.
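To make the described pipeline concrete, below is a minimal sketch of how a geometry-agnostic encoder in the spirit of PhaseCoder could consume raw multichannel audio together with per-microphone 3D coordinates and emit a fixed number of spatial audio token embeddings for an LLM. This is not the authors' implementation: the module names, dimensions, framing parameters, and query-token pooling are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): frame each channel's
# waveform into tokens, inject that microphone's 3D position, run a
# Transformer encoder over the channel-agnostic token set, and read out a
# fixed number of learned query tokens as "spatial audio tokens".
import torch
import torch.nn as nn


class SpatialAudioEncoderSketch(nn.Module):
    def __init__(self, frame_len=400, hop=160, d_model=256, n_layers=4, n_tokens=8):
        super().__init__()
        self.frame_len, self.hop = frame_len, hop
        self.frame_proj = nn.Linear(frame_len, d_model)   # per-channel frame -> token
        self.coord_proj = nn.Linear(3, d_model)           # 3D mic position -> embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.queries = nn.Parameter(torch.randn(n_tokens, d_model))  # learned output tokens

    def forward(self, audio, mic_xyz):
        # audio: (batch, n_mics, n_samples); mic_xyz: (batch, n_mics, 3)
        b = audio.shape[0]
        frames = audio.unfold(-1, self.frame_len, self.hop)           # (b, m, t, frame_len)
        tokens = self.frame_proj(frames)                              # (b, m, t, d)
        tokens = tokens + self.coord_proj(mic_xyz).unsqueeze(2)       # inject geometry per channel
        tokens = tokens.flatten(1, 2)                                 # (b, m*t, d): works for any mic count
        seq = torch.cat([self.queries.expand(b, -1, -1), tokens], 1)  # prepend learned query tokens
        out = self.encoder(seq)
        return out[:, : self.queries.shape[0]]                        # (b, n_tokens, d) spatial tokens


# Toy usage: 4 arbitrarily placed microphones, 1 s of 16 kHz audio.
enc = SpatialAudioEncoderSketch()
emb = enc(torch.randn(2, 4, 16000), torch.rand(2, 4, 3))
print(emb.shape)  # torch.Size([2, 8, 256])
```

Because the mic count only affects the length of the token set, such an encoder is not tied to a particular array geometry; the resulting token embeddings could then be projected into an LLM's input space for spatial reasoning or targeted transcription.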