🤖 AI Summary
This work addresses the limited spatiotemporal understanding of surgical scenes in monocular laparoscopic videos by proposing a fine-tuning-free 4D reasoning framework. It constructs an explicit 4D scene representation by integrating point tracking, monocular depth estimation, and semantic segmentation, and, for the first time, combines general-purpose 2D multimodal large language models (MLLMs) with 3D vision models to enable interpretable, traceable, and spatiotemporally consistent reasoning. Evaluated on a new dataset of 134 clinically relevant questions, the method significantly enhances 4D semantic understanding of surgical instruments and anatomical structures, achieving precise alignment between natural language responses and dynamic scene elements.
📝 Abstract
Spatiotemporal reasoning is a fundamental capability for artificial intelligence (AI) in soft tissue surgery, paving the way for intelligent assistive systems and autonomous robotics. While 2D vision-language models show increasing promise at understanding surgical video, the spatial complexity of surgical scenes suggests that reasoning systems may benefit from explicit 4D representations. Here, we propose a framework for equipping surgical agents with spatiotemporal tools based on an explicit 4D representation, enabling AI systems to ground their natural language reasoning in both time and 3D space. Leveraging models for point tracking, depth, and segmentation, we develop a coherent 4D model with spatiotemporally consistent tool and tissue semantics. A Multimodal Large Language Model (MLLM) then acts as an agent on tools derived from the explicit 4D representation (e.g., trajectories) without any fine-tuning. We evaluate our method on a new dataset of 134 clinically relevant questions and find that the combination of a general-purpose reasoning backbone and our 4D representation significantly improves spatiotemporal understanding and allows for 4D grounding. We demonstrate that spatiotemporal intelligence can be "assembled" from 2D MLLMs and 3D computer vision models without additional training. Code, data, and examples are available at https://tum-ai.github.io/surg4d/
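To make the "agent on 4D tools" idea concrete, here is a minimal, hypothetical sketch of the pattern the abstract describes: tracked points back-projected into 3D over time form trajectories, and simple functions over those trajectories (here, path length) serve as tools an MLLM agent could call to ground its answers. All names (`TrackedPoint`, `trajectory_length`) are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass
import math

# Hypothetical minimal 4D scene element: a 3D point (e.g., from depth
# back-projection of a tracked 2D point) indexed by frame.
@dataclass
class TrackedPoint:
    t: int    # frame index (time)
    x: float  # 3D coordinates in the camera/world frame
    y: float
    z: float

def trajectory_length(points):
    """Total 3D path length of a tracked point across frames.

    An example of a 'tool' derived from the explicit 4D representation
    that an MLLM agent could invoke when asked, e.g., how far an
    instrument tip moved -- no fine-tuning of the MLLM is required.
    """
    pts = sorted(points, key=lambda p: p.t)
    return sum(
        math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))
        for a, b in zip(pts, pts[1:])
    )

# Toy trajectory: a (hypothetical) instrument tip moving 1 unit per frame.
tip = [TrackedPoint(t, float(t), 0.0, 0.0) for t in range(5)]
print(trajectory_length(tip))  # 4.0
```

The point of the sketch is the division of labor: perception models populate the 4D representation once, and the language model only queries deterministic, traceable tools over it, which is what makes the reasoning grounded and auditable.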