JAEGER: Joint 3D Audio-Visual Grounding and Reasoning in Simulated Physical Environments

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing audio-visual large language models, which rely on 2D perception and struggle to perform accurate sound source localization and spatial reasoning in complex 3D environments. To overcome this, the authors propose a 3D audio-visual large language model framework that integrates RGB-D visual inputs with multi-channel first-order Ambisonics audio. A key innovation is the introduction of Neural Intensity Vectors (Neural IV), which enhance directional acoustic cues. Additionally, they construct SpatialSceneQA, a new benchmark of 61,000 samples for 3D instruction tuning and evaluation. Experimental results demonstrate that the proposed method significantly outperforms 2D baselines across multiple spatial perception and reasoning tasks, underscoring the critical importance of explicit 3D modeling for embodied intelligent systems.

📝 Abstract
Current audio-visual large language models (AV-LLMs) are predominantly restricted to 2D perception, relying on RGB video and monaural audio. This design choice introduces a fundamental dimensionality mismatch that precludes reliable source localization and spatial reasoning in complex 3D environments. We address this limitation by presenting JAEGER, a framework that extends AV-LLMs to 3D space, enabling joint spatial grounding and reasoning through the integration of RGB-D observations and multi-channel first-order Ambisonics audio. A core contribution of our work is the neural intensity vector (Neural IV), a learned spatial audio representation that encodes robust directional cues to enhance direction-of-arrival estimation, even in adverse acoustic scenarios with overlapping sources. To facilitate large-scale training and systematic evaluation, we propose SpatialSceneQA, a benchmark of 61k instruction-tuning samples curated from simulated physical environments. Extensive experiments demonstrate that our approach consistently surpasses 2D-centric baselines across diverse spatial perception and reasoning tasks, underscoring the necessity of explicit 3D modelling for advancing AI in physical environments. Our source code, pre-trained model checkpoints and datasets will be released upon acceptance.
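The paper does not detail the Neural IV architecture here, but the classical baseline it builds on is well established: from the four B-format channels (W, X, Y, Z) of a first-order Ambisonics recording, the pseudo-intensity vector in each time-frequency bin points along the dominant sound propagation direction, and averaging it yields a direction-of-arrival (DOA) estimate. The sketch below illustrates that classical computation only; the function name, framing parameters, and channel convention are illustrative assumptions, not the paper's method.

```python
import numpy as np

def pseudo_intensity_doa(w, x, y, z, n_fft=512):
    """Classical pseudo-intensity-vector DOA estimate from first-order
    Ambisonics B-format channels (illustrative sketch, not Neural IV).

    Per time-frequency bin, the pseudo-intensity vector is
    I = Re{ conj(W) * [X, Y, Z] }; summing over bins gives the
    dominant propagation direction.
    """
    def stft(sig):
        # Simple framed FFT with a Hann window and 50% overlap.
        hop = n_fft // 2
        win = np.hanning(n_fft)
        n_frames = 1 + (len(sig) - n_fft) // hop
        frames = np.stack(
            [sig[i * hop : i * hop + n_fft] * win for i in range(n_frames)]
        )
        return np.fft.rfft(frames, axis=-1)

    W, X, Y, Z = (stft(c) for c in (w, x, y, z))
    # Pseudo-intensity vector in every time-frequency bin.
    intensity = np.stack([
        np.real(np.conj(W) * X),
        np.real(np.conj(W) * Y),
        np.real(np.conj(W) * Z),
    ])
    # Sum over time and frequency, then normalize to a unit direction.
    v = intensity.sum(axis=(1, 2))
    v = v / (np.linalg.norm(v) + 1e-12)
    azimuth = np.arctan2(v[1], v[0])                 # radians, x-forward, y-left
    elevation = np.arcsin(np.clip(v[2], -1.0, 1.0))  # radians, z-up
    return azimuth, elevation
```

With an ideal single plane wave, this recovers the source direction exactly; the paper's motivation for a *learned* representation is that this simple estimate degrades under reverberation and overlapping sources.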
Problem

Research questions and friction points this paper is trying to address.

3D audio-visual grounding
spatial reasoning
audio-visual large language models
source localization
dimensionality mismatch
Innovation

Methods, ideas, or system contributions that make the work stand out.

3D audio-visual grounding
Neural Intensity Vector
first-order ambisonics
spatial reasoning
AV-LLM