BAT: Learning to Reason about Spatial Sounds with Large Language Models

📅 2024-02-02
🏛️ International Conference on Machine Learning
📈 Citations: 14
✨ Influential: 4
🤖 AI Summary
To address the scarcity of real-world spatial audio data and the lack of spatial-semantic reasoning capabilities in existing models, this paper proposes the first framework integrating binaural acoustic perception with language-based reasoning. Methodologically, we design Spatial-AST, a spatial audio encoder, and introduce SpatialSoundQA, the first in-the-wild, binaural spatial audio question-answering dataset. Furthermore, we pioneer the use of a large language model (LLaMA-2 7B) for spatial causal and multi-step relational reasoning. Contributions include: (1) a paradigm shift from conventional sound event localization and detection (SELD) toward spatial understanding and semantic reasoning; (2) Spatial-AST achieving state-of-the-art performance in sound event detection, localization, and distance estimation; and (3) our BAT model significantly outperforming baselines on both spatial perception and multi-step reasoning tasks.

๐Ÿ“ Abstract
Spatial sound reasoning is a fundamental human skill, enabling us to navigate and interpret our surroundings based on sound. In this paper, we present BAT, which combines the spatial sound perception ability of a binaural acoustic scene analysis model with the natural language reasoning capabilities of a large language model (LLM) to replicate this innate ability. To address the lack of existing datasets of in-the-wild spatial sounds, we synthesized a binaural audio dataset using AudioSet and SoundSpaces 2.0. Next, we developed SpatialSoundQA, a spatial sound-based question-answering dataset, offering a range of QA tasks that train BAT in various aspects of spatial sound perception and reasoning. The acoustic front-end encoder of BAT is a novel spatial audio encoder named Spatial Audio Spectrogram Transformer, or Spatial-AST, which by itself achieves strong performance across sound event detection, spatial localization, and distance estimation. By integrating Spatial-AST with the LLaMA-2 7B model, BAT transcends standard Sound Event Localization and Detection (SELD) tasks, enabling the model to reason about the relationships between the sounds in its environment. Our experiments demonstrate BAT's superior performance on both spatial sound perception and reasoning, showcasing the immense potential of LLMs in navigating and interpreting complex spatial audio environments.
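The integration the abstract describes, a spatial audio encoder whose output is bridged into an LLM's input sequence, can be sketched structurally. This is a toy illustration under stated assumptions only: `encode_binaural`, `project_audio`, and the audio-prefix scheme below are hypothetical stand-ins for Spatial-AST, the learned projection, and LLaMA-2 conditioning, not the authors' implementation.

```python
# Toy sketch (NOT the authors' code) of an audio-LLM bridge: a binaural clip is
# encoded to one embedding, projected into the LLM's embedding space, and
# prepended to the question tokens as a conditioning prefix.

def encode_binaural(left: list[float], right: list[float]) -> list[float]:
    """Stand-in for Spatial-AST: map a binaural clip to one embedding.
    Channel sum and difference crudely mimic 'what' and 'where' cues."""
    s_l, s_r = sum(left), sum(right)
    return [s_l + s_r, s_l - s_r]

def project_audio(emb: list[float], dim: int = 4) -> list[float]:
    """Stand-in for the learned projection into the LLM embedding space."""
    return [emb[i % len(emb)] for i in range(dim)]

def build_llm_input(audio_emb: list[float], question_tokens: list[str]):
    """Prepend the projected audio embedding to the question tokens,
    mirroring prefix-conditioning used by audio-LLM hybrids."""
    return [("<audio>", project_audio(audio_emb))] + [(t, None) for t in question_tokens]

seq = build_llm_input(encode_binaural([0.1, 0.2], [0.3, 0.1]),
                      ["Where", "is", "the", "dog", "?"])
```

In the real system the projection is trained jointly with the QA objective so that the LLM treats the audio prefix as evidence when answering spatial questions.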
Problem

Research questions and friction points this paper is trying to address.

Replicating human spatial sound reasoning by combining binaural perception models with LLMs
Addressing the lack of in-the-wild spatial audio datasets by synthesizing binaural data
Extending standard SELD tasks with integrated audio-language reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines binaural acoustic scene analysis with LLM-based reasoning
Synthesizes a binaural audio dataset from AudioSet and SoundSpaces 2.0
Introduces the Spatial-AST encoder for spatial audio perception
Zhisheng Zheng
The University of Texas at Austin
Speech and Language Processing, Natural Language Processing, Multimodal Learning
Puyuan Peng
Department of Computer Science, University of Texas at Austin, USA
Ziyang Ma
Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
Xie Chen
Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
Eunsol Choi
New York University
natural language processing, machine learning
David Harwath
The University of Texas at Austin
Speech and Language Processing, Computer Vision, Natural Language Processing, Artificial Intelligence, Machine Learning