🤖 AI Summary
Existing audio-visual large language models (AV-LLMs) and benchmarks predominantly focus on static or 2D scenes and lack systematic evaluation of 3D spatial reasoning in dynamic audio-visual environments. Method: We introduce SAVVY-Bench, the first benchmark with synchronized spatial audio for evaluating dynamic 3D spatial relations, and propose a training-free two-stage inference framework: (1) multimodal perception-based estimation of egocentric motion trajectories, followed by (2) mapping into a global dynamic coordinate system to construct spatiotemporally consistent 3D maps. The framework tightly integrates AV-LLMs with audio-visual perception modules. Contribution/Results: Our work fills two gaps, benchmarking dynamic 3D spatial understanding and establishing a principled inference paradigm for it. Experiments demonstrate substantial performance gains on dynamic 3D spatial reasoning tasks. SAVVY-Bench and the proposed framework provide a new evaluation standard and a technical pathway for assessing and enhancing the spatial cognition capabilities of AV-LLMs.
📝 Abstract
3D spatial reasoning in dynamic, audio-visual environments is a cornerstone of human cognition, yet it remains largely unexplored by existing Audio-Visual Large Language Models (AV-LLMs) and benchmarks, which predominantly focus on static or 2D scenes. We introduce SAVVY-Bench, the first benchmark for 3D spatial reasoning in dynamic scenes with synchronized spatial audio. SAVVY-Bench comprises thousands of relationships involving static and moving objects, and requires fine-grained temporal grounding, consistent 3D localization, and multi-modal annotation. To tackle this challenge, we propose SAVVY, a novel training-free reasoning pipeline that consists of two stages: (i) Egocentric Spatial Tracks Estimation, which leverages AV-LLMs as well as other audio-visual methods to track the trajectories of key objects related to the query using both visual and spatial audio cues, and (ii) Dynamic Global Map Construction, which aggregates multi-modal queried object trajectories and converts them into a unified global dynamic map. Using the constructed map, a final QA answer is obtained through a coordinate transformation that aligns the global map with the queried viewpoint. Empirical evaluation demonstrates that SAVVY substantially enhances the performance of state-of-the-art AV-LLMs, setting a new standard and stage for approaching dynamic 3D spatial reasoning in AV-LLMs.
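The final step above, aligning the global map with the queried viewpoint, can be illustrated with a minimal sketch. The function names, the yaw-only parameterization, and the "x forward, y left" egocentric convention are all assumptions for illustration; the paper does not specify the exact transform:

```python
import numpy as np

def world_to_viewpoint(p_world, viewer_pos, viewer_yaw):
    """Transform a 3D point from the global map frame into the queried
    viewpoint's egocentric frame (hypothetical helper; yaw-only rotation
    about the vertical z axis is an assumed simplification)."""
    c, s = np.cos(viewer_yaw), np.sin(viewer_yaw)
    # Rotate the world offset into the viewer's frame after translating
    # so the viewer sits at the origin.
    R = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    return R @ (np.asarray(p_world, dtype=float) - np.asarray(viewer_pos, dtype=float))

def left_or_right(p_world, viewer_pos, viewer_yaw):
    """Answer a simple spatial-relation query in the viewpoint frame,
    using an assumed convention of x = forward, y = left."""
    p = world_to_viewpoint(p_world, viewer_pos, viewer_yaw)
    return "left" if p[1] > 0 else "right"
```

For a moving object, the same transform would be applied per timestamp along its trajectory in the dynamic map, so the relation can be read off at the queried moment.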