A 4D Representation for Training-Free Agentic Reasoning from Monocular Laparoscopic Video

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited spatiotemporal understanding of surgical scenes in monocular laparoscopic videos by proposing a fine-tuning-free 4D reasoning framework. It constructs an explicit 4D scene representation by integrating point tracking, monocular depth estimation, and semantic segmentation, and, for the first time, combines general-purpose 2D multimodal large language models (MLLMs) with 3D vision models to enable interpretable, traceable, and spatiotemporally consistent reasoning. Evaluated on a new dataset comprising 134 clinically relevant questions, the method significantly enhances 4D semantic understanding of surgical instruments and anatomical structures, achieving precise alignment between natural language responses and dynamic scene elements.
📝 Abstract
Spatiotemporal reasoning is a fundamental capability for artificial intelligence (AI) in soft tissue surgery, paving the way for intelligent assistive systems and autonomous robotics. While 2D vision-language models show increasing promise at understanding surgical video, the spatial complexity of surgical scenes suggests that reasoning systems may benefit from explicit 4D representations. Here, we propose a framework for equipping surgical agents with spatiotemporal tools based on an explicit 4D representation, enabling AI systems to ground their natural language reasoning in both time and 3D space. Leveraging models for point tracking, depth, and segmentation, we develop a coherent 4D model with spatiotemporally consistent tool and tissue semantics. A Multimodal Large Language Model (MLLM) then acts as an agent on tools derived from the explicit 4D representation (e.g., trajectories) without any fine-tuning. We evaluate our method on a new dataset of 134 clinically relevant questions and find that the combination of a general-purpose reasoning backbone and our 4D representation significantly improves spatiotemporal understanding and allows for 4D grounding. We demonstrate that spatiotemporal intelligence can be "assembled" from 2D MLLMs and 3D computer vision models without additional training. Code, data, and examples are available at https://tum-ai.github.io/surg4d/
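The pipeline described in the abstract, lifting 2D point tracks into 3D via monocular depth and exposing the resulting trajectories as tools for an MLLM agent, can be sketched roughly as follows. This is a minimal illustration with synthetic inputs, not the paper's implementation; all function names (`backproject`, `lift_tracks_to_4d`, `trajectory_tool`) and the pinhole-camera assumptions are hypothetical.

```python
import numpy as np

def backproject(depth, K):
    """Back-project a depth map (H, W) to camera-space 3D points (H, W, 3)
    using pinhole intrinsics K (assumed; the paper's camera model may differ)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1)

def lift_tracks_to_4d(tracks_2d, depths, seg_masks, K):
    """Lift 2D point tracks (T, N, 2) into 4D trajectories: per-frame 3D
    positions (T, N, 3) plus semantic labels sampled from segmentation masks."""
    T, N, _ = tracks_2d.shape
    traj = np.zeros((T, N, 3))
    labels = np.zeros((T, N), dtype=int)
    for t in range(T):
        pts3d = backproject(depths[t], K)
        for n in range(N):
            u, v = tracks_2d[t, n].astype(int)
            traj[t, n] = pts3d[v, u]          # 3D position of tracked point
            labels[t, n] = seg_masks[t][v, u]  # tool/tissue class at that pixel
    return traj, labels

def trajectory_tool(traj, point_id):
    """Example of a 'tool' an agent might call: summarize one trajectory
    so the MLLM can reason over motion without seeing raw geometry."""
    p = traj[:, point_id]
    displacement = np.linalg.norm(p[-1] - p[0])
    path_length = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    return {"displacement": float(displacement), "path_length": float(path_length)}
```

In this reading, the MLLM never ingests the 4D representation directly; it calls small, interpretable tools like `trajectory_tool` and grounds its answers in their numeric outputs, which is what makes the reasoning traceable.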
Problem

Research questions and friction points this paper is trying to address.

4D representation
spatiotemporal reasoning
monocular laparoscopic video
surgical AI
training-free reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

4D representation
training-free agentic reasoning
spatiotemporal grounding
multimodal large language model
monocular laparoscopic video
Maximilian Fehrentz
Computer Aided Medical Procedures, TU Munich, Munich, Germany
Nicolas Stellwag
TUM.ai, Munich, Germany
Robert Wiebe
TUM.ai, Munich, Germany
Nicole Thorisch
TUM.ai, Munich, Germany
Fabian Grob
TUM.ai, Munich, Germany
Patrick Remerscheid
TUM.ai, Munich, Germany
Ken-Joel Simmoteit
TUM.ai, Munich, Germany
Benjamin D. Killeen
Postdoc, Technical University of Munich
Surgical Data Science, Medical AI, Robotics, Simulation
Christian Heiliger
MD, LMU Munich
Surgical AI
Nassir Navab
Professor of Computer Science, Technische Universität München