Visual-Semantic Knowledge Conflicts in Operating Rooms: Synthetic Data Curation for Surgical Risk Perception in Multimodal Large Language Models

πŸ“… 2025-06-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In operating room (OR) risk identification, multimodal large language models (MLLMs) suffer from visual-semantic knowledge conflict (VS-KC)β€”i.e., strong textual rule comprehension but poor detection of safety violations in images. To address this, we propose a diffusion-based method to synthesize rule-violating scenarios, generating 34,000 high-fidelity synthetic images containing diverse safety violations, augmented with 214 human-annotated ground-truth images, forming the first open-source OR-VSKC dataset and benchmark. This dataset systematically exposes MLLMs’ inconsistency in knowledge alignment at the violation-entity levelβ€”a previously uncharacterized deficiency. Fine-tuned models achieve significant gains in detecting known violation types and demonstrate viewpoint generalization; however, performance degrades markedly on unseen entities, underscoring the critical need for cross-entity coverage in training. Our work establishes a foundational resource and diagnostic framework for advancing robust, entity-aware safety reasoning in surgical AI.

πŸ“ Abstract
Surgical risk identification is critical for patient safety and reducing preventable medical errors. While multimodal large language models (MLLMs) show promise for automated operating room (OR) risk detection, they often exhibit visual-semantic knowledge conflicts (VS-KC), failing to identify visual safety violations despite understanding textual rules. To address this, we introduce a dataset comprising over 34,000 synthetic images generated by diffusion models, depicting operating room scenes containing entities that violate established safety rules. These images were created to alleviate data scarcity and examine MLLMs' vulnerabilities. In addition, the dataset includes 214 human-annotated images that serve as a gold-standard reference for validation. This comprehensive dataset, spanning diverse perspectives, stages, and configurations, is designed to expose and study VS-KC. Fine-tuning on OR-VSKC significantly improves MLLMs' detection of trained conflict entities and generalizes well to new viewpoints for these entities, but performance on untrained entity types remains poor, highlighting learning specificity and the need for comprehensive training. The main contributions of this work include: (1) a data generation methodology tailored for rule-violation scenarios; (2) the release of the OR-VSKC dataset and its associated benchmark as open-source resources; and (3) an empirical analysis of violation-sensitive knowledge consistency in representative MLLMs. The dataset and appendix are available at https://github.com/zgg2577/VS-KC.
Problem

Research questions and friction points this paper is trying to address.

Detect visual safety violations in operating rooms using MLLMs
Address visual-semantic knowledge conflicts in surgical risk perception
Improve MLLMs' generalization for new safety violation scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic images generated by diffusion models
Dataset with human-annotated gold-standard references
Fine-tuning MLLMs for improved risk detection
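The data generation methodology crosses rule-violating entities with scene viewpoints to obtain diverse diffusion prompts. A minimal sketch of that prompt-composition step is shown below; the entity and viewpoint lists, the `build_prompts` helper, and the template wording are all illustrative assumptions, not taken from the OR-VSKC release.

```python
# Hypothetical sketch of rule-violation prompt composition for a
# diffusion-based OR scene generator. Entities, viewpoints, and the
# template are illustrative placeholders, not the paper's actual lists.
from itertools import product

VIOLATION_ENTITIES = [
    "a surgeon without a surgical mask",
    "a cell phone lying on the sterile instrument tray",
    "an unsterilized tool touching the drape",
]
VIEWPOINTS = [
    "overhead view",
    "side view from the anesthesia station",
]

def build_prompts(entities, viewpoints):
    """Cross every violation entity with every viewpoint to cover
    diverse perspectives and configurations in the synthetic set."""
    template = "A photorealistic operating room scene, {view}, showing {entity}."
    return [template.format(view=v, entity=e) for e, v in product(entities, viewpoints)]

prompts = build_prompts(VIOLATION_ENTITIES, VIEWPOINTS)
# Each prompt would then be passed to a text-to-image diffusion model.
```

Crossing entities with viewpoints is what lets the resulting benchmark separate viewpoint generalization (new views of trained entities) from entity generalization (untrained violation types), the axis on which fine-tuned models were found to degrade.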
πŸ”Ž Similar Papers
No similar papers found.
Weiyi Zhao
Shanghai University of Engineering Science, Shanghai, China
Xiaoyu Tan
INFLY TECH (Shanghai) Co., Ltd., Shanghai, China
Liang Liu
Clinical Research Unit, Zhongshan Hospital of Fudan University, Shanghai, China
Sijia Li
Institute of Information Engineering, Chinese Academy of Sciences
Youwei Song
Shanghai University of Engineering Science, Shanghai, China
Xihe Qiu
Associate Professor, Shanghai University of Engineering Science
AI for Healthcare · Vision-Language Models · Reinforcement Learning · Large Language Models