🤖 AI Summary
Existing reasoning segmentation (RS) methods struggle with multi-step and complex spatiotemporal reasoning, suffer from catastrophic forgetting due to reliance on LLM fine-tuning, and lack support for online video streams. To address these limitations, we propose the first LLM-fine-tuning-free online video reasoning segmentation framework. Our approach introduces a “just-in-time digital twin” mechanism: an LLM dynamically plans and orchestrates lightweight, task-specific vision models to construct scene representations incrementally, thereby decoupling perception from reasoning. The framework integrates streaming video processing with collaborative inference across multiple specialist models. Evaluated on a newly constructed benchmark comprising 200 videos and 895 reasoning queries, our method achieves an average IoU improvement of 12.6% across semantic, spatial, and temporal reasoning tasks, significantly outperforming state-of-the-art approaches.
📝 Abstract
Reasoning segmentation (RS) aims to identify and segment objects of interest based on implicit text queries. As such, RS is a catalyst for embodied AI agents, enabling them to interpret high-level commands without requiring explicit step-by-step guidance. However, current RS approaches rely heavily on the visual perception capabilities of multimodal large language models (LLMs), leading to several major limitations. First, they struggle with queries that require multiple steps of reasoning or that involve complex spatial/temporal relationships. Second, they necessitate LLM fine-tuning, which may require frequent updates to maintain compatibility with contemporary LLMs and may increase the risk of catastrophic forgetting during fine-tuning. Finally, being primarily designed for static images or offline video processing, they scale poorly to online video data. To address these limitations, we propose an agent framework that disentangles perception and reasoning for online video RS without LLM fine-tuning. Our innovation is the introduction of a just-in-time digital twin concept, where -- given an implicit query -- an LLM plans the construction of a low-level scene representation from high-level video using specialist vision models. We refer to this approach to creating a digital twin as "just-in-time" because the LLM planner anticipates the need for specific information and requests only this limited subset instead of always evaluating every specialist model. The LLM then performs reasoning on this digital twin representation to identify target objects. To evaluate our approach, we introduce a new comprehensive video reasoning segmentation benchmark comprising 200 videos with 895 implicit text queries. The benchmark spans three reasoning categories (semantic, spatial, and temporal) with three levels of reasoning chain complexity.
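To make the just-in-time digital twin idea concrete, below is a minimal Python sketch of one plausible reading of the loop: a planner selects only the specialist vision models the query calls for, the scene representation is built incrementally as frames stream in, and reasoning runs over that representation. All identifiers (`plan_tools`, `SPECIALISTS`, `DigitalTwin`, `reason_over_twin`) and the dummy model outputs are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a just-in-time digital twin loop for online video RS.
# The specialist models and the LLM planner/reasoner are replaced by stubs.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class DigitalTwin:
    """Incrementally built low-level scene representation (the digital twin)."""
    frames: Dict[int, Dict[str, Any]] = field(default_factory=dict)

    def update(self, frame_idx: int, tool: str, result: Any) -> None:
        self.frames.setdefault(frame_idx, {})[tool] = result


# Stand-ins for lightweight specialist vision models (detector, depth, tracker).
SPECIALISTS: Dict[str, Callable[[Any], Any]] = {
    "detect_objects": lambda frame: [{"label": "cup", "box": (10, 20, 50, 60)}],
    "estimate_depth": lambda frame: {"cup": 1.4},           # dummy depth (m)
    "track_motion":   lambda frame: {"cup": "moving_left"},  # dummy motion
}


def plan_tools(query: str, available: List[str]) -> List[str]:
    """Hypothetical LLM planning step: anticipate which specialist outputs the
    implicit query needs instead of always evaluating every model."""
    needed = ["detect_objects"]
    if any(w in query for w in ("closest", "behind", "nearest")):
        needed.append("estimate_depth")
    if any(w in query for w in ("moving", "approaching")):
        needed.append("track_motion")
    return [t for t in needed if t in available]


def reason_over_twin(query: str, twin: DigitalTwin) -> Any:
    """Hypothetical LLM reasoning step over the digital twin; here it simply
    returns the latest detections as a placeholder for the target objects."""
    latest = max(twin.frames)
    return twin.frames[latest].get("detect_objects")


def online_reasoning_segmentation(video_stream, query: str):
    twin = DigitalTwin()
    tools = plan_tools(query, list(SPECIALISTS))       # plan once per query
    for idx, frame in enumerate(video_stream):         # online / streaming
        for tool in tools:                             # run only requested models
            twin.update(idx, tool, SPECIALISTS[tool](frame))
        yield idx, reason_over_twin(query, twin)       # target object(s) so far


if __name__ == "__main__":
    dummy_stream = (object() for _ in range(3))        # stands in for video frames
    for idx, target in online_reasoning_segmentation(dummy_stream, "the cup moving left"):
        print(f"frame {idx}: target -> {target}")
```

In this sketch the perception/reasoning split is explicit: the specialist models populate the twin, while the (stubbed) LLM only plans which models to invoke and reasons over the resulting representation, which is why no LLM fine-tuning is required in the described framework.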