AI Summary
This work addresses the limited 3D spatial reasoning capabilities of multimodal large language models (MLLMs) when processing video inputs, which stem from their difficulty in constructing structured three-dimensional spatial representations. Inspired by allocentric spatial reasoning theories in cognitive science, the authors propose TRACE prompting, a novel method that encodes allocentric spatial context into structured textual intermediate reasoning traces. By integrating meta-context, camera trajectories, and object entity information, TRACE generates a coherent textual representation that guides MLLMs toward robust 3D spatial reasoning. Experiments demonstrate that this approach significantly outperforms existing prompting strategies on both VSI-Bench and OST-Bench, with consistent performance gains across MLLMs of varying parameter scales and training paradigms.
Abstract
Existing Multimodal Large Language Models (MLLMs) struggle with 3D spatial reasoning, as they fail to construct structured abstractions of the 3D environment depicted in video inputs. To bridge this gap, drawing inspiration from cognitive theories of allocentric spatial reasoning, we investigate how to enable MLLMs to model and reason over text-based spatial representations of video. Specifically, we introduce Textual Representation of Allocentric Context from Egocentric Video (TRACE), a prompting method that induces MLLMs to generate text-based representations of 3D environments as intermediate reasoning traces for more accurate spatial question answering. TRACE encodes meta-context, camera trajectories, and detailed object entities to support structured spatial reasoning over egocentric videos. Extensive experiments on VSI-Bench and OST-Bench demonstrate that TRACE yields notable and consistent improvements over prior prompting strategies across a diverse range of MLLM backbones, spanning different parameter scales and training schemes. We further present ablation studies to validate our design choices, along with detailed analyses that probe the bottlenecks of 3D spatial reasoning in MLLMs.
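To make the idea concrete, the three components named above (meta-context, camera trajectory, object entities) could be serialized into a single textual scene representation that precedes the spatial question. The sketch below is purely illustrative and assumes hypothetical input structures; it is not the authors' actual prompt format.

```python
# Illustrative sketch only: one plausible way to serialize a
# TRACE-style allocentric scene description as text. The field
# names and layout here are assumptions, not the paper's format.

def build_trace_prompt(meta, trajectory, objects):
    """Encode meta-context, camera trajectory, and object entities
    as a structured text block to prepend to a spatial question."""
    lines = ["[Meta-context]"]
    lines += [f"{k}: {v}" for k, v in meta.items()]
    lines.append("[Camera trajectory]")
    lines += [
        f"t={t}: pos=({x:.1f}, {y:.1f}), heading={h} deg"
        for t, (x, y, h) in enumerate(trajectory)
    ]
    lines.append("[Objects]")
    lines += [
        f"{name}: at ({x:.1f}, {y:.1f}), size={size}"
        for name, (x, y, size) in objects.items()
    ]
    return "\n".join(lines)

# Hypothetical example inputs for a small indoor scene.
prompt = build_trace_prompt(
    meta={"scene": "apartment", "frames": 32},
    trajectory=[(0.0, 0.0, 0), (1.5, 0.0, 90)],
    objects={"sofa": (2.0, 1.0, "large"), "lamp": (0.5, 2.5, "small")},
)
print(prompt)
```

A representation like this would then serve as the intermediate reasoning trace the MLLM generates and conditions on before answering the spatial question.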