🤖 AI Summary
Current operating room (OR) datasets suffer from limited scale, low fidelity, and single-modality constraints, hindering progress in intelligent OR modeling. To address this, we introduce MM-OR, the first large-scale, high-fidelity, multimodal spatiotemporal OR dataset, comprising over 100,000 frames and integrating RGB-D video, audio, speech transcripts, robotic logs, and 3D pose trajectories. It further provides panoptic segmentation masks, semantic scene graphs, and annotations for diverse downstream tasks. We propose the first OR-specific multimodal scene graph generation paradigm and design MM2SG, a dedicated multimodal large vision-language model. MM2SG achieves significant improvements over unimodal baselines on cross-modal reasoning and scene graph generation. This work establishes a new benchmark for holistic OR understanding, with all code and data publicly released.
📝 Abstract
Operating rooms (ORs) are complex, high-stakes environments requiring precise understanding of interactions among medical staff, tools, and equipment to enhance surgical assistance, situational awareness, and patient safety. Current datasets fall short in scale and realism, and do not capture the multimodal nature of OR scenes, limiting progress in OR modeling. To this end, we introduce MM-OR, a realistic and large-scale multimodal spatiotemporal OR dataset, and the first dataset to enable multimodal scene graph generation. MM-OR captures comprehensive OR scenes containing RGB-D data, detail views, audio, speech transcripts, robotic logs, and tracking data, and is annotated with panoptic segmentations, semantic scene graphs, and downstream task labels. Further, we propose MM2SG, the first multimodal large vision-language model for scene graph generation, and through extensive experiments demonstrate its ability to effectively leverage multimodal inputs. Together, MM-OR and MM2SG establish a new benchmark for holistic OR understanding and open the path towards multimodal scene analysis in complex, high-stakes environments. Our code and data are available at https://github.com/egeozsoy/MM-OR.