Spatial-ORMLLM: Improve Spatial Relation Understanding in the Operating Room with Multimodal Large Language Model

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing surgical scene understanding methods rely on multi-sensor 3D data or explicit 3D annotations, which makes fine-grained spatial relationship modeling difficult in real-world clinical settings where only monocular RGB video is available. This paper introduces the first end-to-end vision-language large model tailored for operating rooms; it requires no additional hardware, expert annotations, or multimodal 3D inputs, and instead implicitly learns 3D spatial semantics from raw RGB images alone. Our core innovation is a spatially enhanced feature fusion module that deeply integrates 2D visual features with lightweight 3D spatial priors estimated from single-view imagery, embedding them into a unified multimodal large language model framework to enable cross-modal 3D scene reasoning. Evaluated on multiple clinical benchmarks, our method achieves state-of-the-art performance, generalizes to unseen spatial configurations, and effectively supports intraoperative perception, risk alerting, and decision assistance.

📝 Abstract
Precise spatial modeling in the operating room (OR) is foundational to many clinical tasks, supporting intraoperative awareness, hazard avoidance, and surgical decision-making. Existing approaches leverage large-scale multimodal datasets for latent-space alignment to implicitly learn spatial relationships, but they overlook the 3D capabilities of MLLMs, and this reliance raises two issues: (1) operating rooms typically lack multiple video and audio sensors, making multimodal 3D data difficult to obtain; (2) training solely on readily available 2D data fails to capture fine-grained details in complex scenes. To address this gap, we introduce Spatial-ORMLLM, the first large vision-language model for 3D spatial reasoning in operating rooms that uses only the RGB modality to infer volumetric and semantic cues, enabling downstream medical tasks with detailed and holistic spatial context. Spatial-ORMLLM incorporates a Spatial-Enhanced Feature Fusion Block, which integrates 2D modality inputs with rich 3D spatial knowledge extracted by a single-view estimation algorithm and then feeds the combined features into the visual tower. By employing a unified end-to-end MLLM framework, it combines powerful spatial features with textual features to deliver robust 3D scene reasoning without any additional expert annotations or sensor inputs. Experiments on multiple benchmark clinical datasets demonstrate that Spatial-ORMLLM achieves state-of-the-art performance and generalizes robustly to previously unseen surgical scenarios and downstream tasks.
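The abstract does not give implementation details of the Spatial-Enhanced Feature Fusion Block, so the following is a minimal PyTorch sketch of how such a block could look, assuming a ViT-style visual tower with 768-dimensional patch tokens and a depth map produced by an off-the-shelf monocular estimator. The module names, dimensions, and cross-attention design are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch: 2D patch tokens attend to tokens embedded from a
# monocular depth prior, then fuse via a residual connection.
import torch
import torch.nn as nn


class SpatialEnhancedFusionBlock(nn.Module):
    """Fuses 2D patch tokens with depth-derived spatial tokens (illustrative)."""

    def __init__(self, dim: int = 768, num_heads: int = 8, patch: int = 16):
        super().__init__()
        # Embed the single-channel depth prior into the same token space
        # as the 2D visual patches (patch size 16 is an assumption).
        self.depth_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        # 2D tokens query the depth tokens to absorb 3D spatial cues.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vis_tokens: torch.Tensor, depth_map: torch.Tensor) -> torch.Tensor:
        # vis_tokens: (B, N, dim) patch features from the visual tower
        # depth_map:  (B, 1, H, W) prior from a monocular depth estimator
        d = self.depth_embed(depth_map)               # (B, dim, H/16, W/16)
        d = d.flatten(2).transpose(1, 2)              # (B, M, dim) depth tokens
        fused, _ = self.cross_attn(vis_tokens, d, d)  # query = 2D, key/value = depth
        return self.norm(vis_tokens + fused)          # residual fusion


# Toy usage: a 224x224 frame yields 196 ViT patch tokens plus a depth prior.
block = SpatialEnhancedFusionBlock()
vis = torch.randn(1, 196, 768)       # 2D features (e.g., from a CLIP ViT)
depth = torch.randn(1, 1, 224, 224)  # depth prior (e.g., from a MiDaS-style model)
print(block(vis, depth).shape)       # torch.Size([1, 196, 768])
```

Cross-attention is one plausible choice here because it lets every 2D patch token selectively pull spatial context from the depth tokens while the residual path preserves the original visual features for the downstream visual tower.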
Problem

Research questions and friction points this paper is trying to address.

Improving spatial relation understanding in operating rooms using MLLMs
Addressing the lack of 3D data in ORs with an RGB-only modality
Enhancing 3D scene reasoning without expert annotations or extra sensors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses the RGB modality alone for 3D spatial reasoning
Integrates 2D inputs with estimated 3D spatial knowledge
Unified end-to-end MLLM framework without expert annotations (see the projection sketch after this list)
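To make the "unified end-to-end MLLM framework" concrete, here is a hedged sketch of the standard LLaVA-style wiring such a system would plausibly use: fused visual tokens are projected into the language model's embedding space and concatenated with text embeddings. The two-layer MLP projector and the 4096-dimensional LLM embedding size are assumptions for illustration, not details confirmed by the paper.

```python
# Hypothetical sketch: project fused spatial-visual tokens into the LLM
# embedding space, then prepend them to the text-token embeddings.
import torch
import torch.nn as nn


class VisualProjector(nn.Module):
    """Maps fused visual tokens into the language model's embedding space."""

    def __init__(self, vis_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vis_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, fused_tokens: torch.Tensor) -> torch.Tensor:
        return self.proj(fused_tokens)  # (B, N, llm_dim)


projector = VisualProjector()
fused = torch.randn(1, 196, 768)        # output of the fusion block sketched above
visual_embeds = projector(fused)        # (1, 196, 4096)
text_embeds = torch.randn(1, 32, 4096)  # embedded prompt tokens (placeholder)
# The LLM consumes the concatenated sequence for cross-modal 3D reasoning.
inputs_embeds = torch.cat([visual_embeds, text_embeds], dim=1)
print(inputs_embeds.shape)              # torch.Size([1, 228, 4096])
```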
👥 Authors
Peiqi He (Hunan University)
Zhenhao Zhang (ShanghaiTech University)
Yixiang Zhang (Hunan University)
Xiongjun Zhao (Hunan University)
Shaoliang Peng (Cheung Kong Professor, Hunan University; research interests: High Performance Computing, Big Data, Bioinformatics, AI)