🤖 AI Summary
This work investigates whether multimodal large language models (MLLMs) can comprehend and reason about dynamic visual events in four-dimensional spacetime from an egocentric perspective. To this end, we introduce Ego-ST Bench, the first benchmark for joint spatio-temporal reasoning on egocentric videos, comprising over 5,000 question-answer pairs. We further propose ST-R1, a video reasoning model trained with a reverse-thinking-enhanced reinforcement learning framework that integrates long-chain-of-thought (long-CoT) supervised fine-tuning, Group Relative Policy Optimization (GRPO), and explicit video spatio-temporal modeling. Experiments show that ST-R1 achieves substantial gains in joint spatio-temporal reasoning on Ego-ST Bench. Our contributions are threefold: (1) the first dedicated benchmark for egocentric 4D reasoning; (2) a reproducible training paradigm combining structured reasoning, policy optimization, and explicit spatio-temporal representation; and (3) a strong, open baseline model for embodied 4D world understanding.
📝 Abstract
Humans excel at spatio-temporal reasoning, effortlessly interpreting dynamic visual events from an egocentric viewpoint. However, whether multimodal large language models (MLLMs) can similarly comprehend the 4D world remains uncertain. This paper explores multimodal spatio-temporal reasoning from an egocentric perspective, aiming to equip MLLMs with human-like reasoning capabilities. To support this objective, we introduce Ego-ST Bench, a novel benchmark containing over 5,000 question-answer pairs across four categories, systematically evaluating spatial, temporal, and integrated spatio-temporal reasoning. Additionally, we propose the ST-R1 Video model, a video-based reasoning model that incorporates reverse thinking into its reinforcement learning process, significantly enhancing performance. We combine long-chain-of-thought (long-CoT) supervised fine-tuning with Group Relative Policy Optimization (GRPO) reinforcement learning, achieving notable improvements with limited high-quality data. Ego-ST Bench and ST-R1 provide valuable insights and resources for advancing video-based spatio-temporal reasoning research.
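The core of the GRPO stage described above is a group-relative advantage: for each question, several candidate answers are sampled, scored by a reward function, and each reward is normalized against the statistics of its own group, so no learned value critic is needed. A minimal sketch of that normalization step follows; the function name and the 0/1 correctness rewards are illustrative assumptions, not details from the paper.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize a group of scalar rewards to zero mean and unit std.

    Each reward comes from one sampled response to the same prompt;
    the normalized value is the advantage used to weight that
    response's policy-gradient update in GRPO-style training.
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)  # population std over the sampled group
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four sampled answers to one question, scored 0/1 for correctness.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Because advantages are computed per group, correct answers are pushed up exactly to the extent they beat their sibling samples, which is what makes the method data-efficient with small amounts of high-quality supervision.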