🤖 AI Summary
Spatial audio understanding of first-order ambisonic (FOA) signals remains challenging, particularly for scene-level semantic interpretation beyond conventional sound event localization and detection (SELD).
Method: We propose the first question-answering (QA)-based SELD paradigm, extending the task to scene-level existence verification, spatiotemporal localization, and semantic relation reasoning. We construct the first fine-grained spatiotemporal textual description and QA dataset for FOA audio; enhance linguistic diversity by pairing rule-based generation with large language model (LLM) paraphrasing and question diversification (see the sketch below); and design a language-guided, weakly supervised joint modeling framework that enables end-to-end training without frame-level annotations.
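As a rough illustration of how rule-based generation and LLM paraphrasing can fit together, here is a minimal Python sketch. The templates, the annotation fields (`event`, `azimuth`, `onset`, `offset`), and the `paraphrase` stub standing in for an LLM call are all hypothetical, not the paper's actual rules or prompts.

```python
# Minimal sketch of rule-based QA generation plus paraphrasing.
# All templates and annotation fields are hypothetical assumptions.

# Hypothetical scene annotation: one record per sound event.
scene_events = [
    {"event": "speech", "azimuth": 45, "onset": 1.2, "offset": 3.5},
    {"event": "door slam", "azimuth": -90, "onset": 4.0, "offset": 4.3},
]

def direction_word(azimuth: float) -> str:
    """Map an azimuth in degrees to a coarse direction label."""
    return "left" if azimuth < 0 else "right" if azimuth > 0 else "front"

def paraphrase(question: str) -> list[str]:
    """Stand-in for an LLM rephrasing call; returns trivial variants
    so the sketch stays self-contained and runnable offline."""
    return [question, question.replace("Is there", "Can you hear")]

def generate_qa(events: list[dict]) -> list[dict]:
    qa_pairs = []
    for ev in events:
        # Existence question from a fixed template.
        q = f"Is there any {ev['event']} in the scene?"
        qa_pairs += [{"question": v, "answer": "yes"} for v in paraphrase(q)]
        # Localization question, answered from the annotated azimuth.
        q = f"From which direction does the {ev['event']} come?"
        qa_pairs.append({"question": q, "answer": direction_word(ev["azimuth"])})
    return qa_pairs

for pair in generate_qa(scene_events)[:3]:
    print(pair)
```

Extending this pattern to temporal-relation questions amounts to comparing onset/offset fields across pairs of annotated events.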
Results: On STARSS23, our method achieves performance on par with fully supervised SELD approaches while using only scene-level QA supervision, demonstrating the efficacy and generalization potential of language-driven spatial audio understanding.
📝 Abstract
In this paper, we introduce a novel framework for spatial audio understanding of first-order ambisonic (FOA) signals through a question answering (QA) paradigm, aiming to extend the scope of sound event localization and detection (SELD) towards spatial scene understanding and reasoning. First, we curate and release fine-grained spatiotemporal textual descriptions for the STARSS23 dataset using a rule-based approach, and further enhance linguistic diversity through large language model (LLM)-based rephrasing. We also introduce a QA dataset aligned with the STARSS23 scenes, covering event presence, localization, and spatial and temporal relationships. To increase language variety, we again leverage LLMs to generate multiple rephrasings per question. Finally, we develop a baseline spatial audio QA model that takes FOA signals and natural language questions as input and answers questions about the occurrence, temporal relationships, and spatial relationships of sound events in the scene, formulated as a classification task. Despite being trained solely with scene-level question answering supervision, our model achieves performance comparable to a fully supervised sound event localization and detection model trained with frame-level spatiotemporal annotations. The results highlight the potential of language-guided approaches for spatial audio understanding and open new directions for integrating linguistic supervision into spatial scene analysis.
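To make the shape of such a baseline concrete, here is a minimal PyTorch sketch combining an FOA audio encoder, a question encoder, and an answer classifier. The spectrogram front end, GRU text branch, concatenation fusion, and every layer size are illustrative assumptions, not the paper's reported architecture.

```python
# Minimal sketch of a spatial audio QA baseline; all dimensions are guesses.
import torch
import torch.nn as nn

class SpatialAudioQA(nn.Module):
    def __init__(self, vocab_size=5000, n_answers=50, dim=128):
        super().__init__()
        # Audio branch: 4-channel FOA spectrograms -> pooled conv features.
        self.audio_enc = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, dim),
        )
        # Text branch: embedded question tokens -> GRU summary vector.
        self.embed = nn.Embedding(vocab_size, dim)
        self.text_enc = nn.GRU(dim, dim, batch_first=True)
        # Fusion + answer classifier (QA posed as classification).
        self.head = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, n_answers))

    def forward(self, foa_spec, question_ids):
        a = self.audio_enc(foa_spec)                 # (B, dim)
        _, h = self.text_enc(self.embed(question_ids))
        q = h[-1]                                    # (B, dim)
        return self.head(torch.cat([a, q], dim=-1))  # answer logits

# Usage: a batch of 4-channel FOA spectrograms and tokenized questions.
model = SpatialAudioQA()
logits = model(torch.randn(2, 4, 64, 100), torch.randint(0, 5000, (2, 12)))
print(logits.shape)  # torch.Size([2, 50])
```

Posing the answer as classification over a fixed answer set is what allows end-to-end training from scene-level QA labels alone, with no frame-level spatiotemporal targets.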