AI Summary
In operating room (OR) risk identification, multimodal large language models (MLLMs) suffer from visual-semantic knowledge conflict (VS-KC): they comprehend textual safety rules well but fail to detect violations of those rules in images. To address this, we propose a diffusion-based method for synthesizing rule-violating scenarios, generating 34,000 high-fidelity synthetic images depicting diverse safety violations, augmented with 214 human-annotated ground-truth images, to form OR-VSKC, the first open-source OR dataset and benchmark of its kind. The dataset systematically exposes MLLMs' inconsistent knowledge alignment at the violation-entity level, a previously uncharacterized deficiency. Fine-tuned models achieve significant gains in detecting trained violation types and generalize across viewpoints; however, performance degrades markedly on unseen entities, underscoring the critical need for cross-entity coverage in training. Our work establishes a foundational resource and diagnostic framework for advancing robust, entity-aware safety reasoning in surgical AI.
Abstract
Surgical risk identification is critical for patient safety and for reducing preventable medical errors. While multimodal large language models (MLLMs) show promise for automated operating room (OR) risk detection, they often exhibit visual-semantic knowledge conflicts (VS-KC), failing to identify visual safety violations despite understanding the corresponding textual rules. To address this, we introduce a dataset of over 34,000 synthetic images generated by diffusion models, depicting operating room scenes that contain entities violating established safety rules. These images were created to alleviate data scarcity and to examine MLLMs' vulnerabilities. In addition, the dataset includes 214 human-annotated images that serve as a gold-standard reference for validation. This comprehensive dataset, spanning diverse perspectives, surgical stages, and configurations, is designed to expose and study VS-KC. Fine-tuning on OR-VSKC significantly improves MLLMs' detection of trained conflict entities and generalizes well to new viewpoints of those entities, but performance on untrained entity types remains poor, highlighting the specificity of what is learned and the need for comprehensive training coverage. The main contributions of this work are: (1) a data generation methodology tailored to rule-violation scenarios; (2) the release of the OR-VSKC dataset and its associated benchmark as open-source resources; and (3) an empirical analysis of violation-sensitive knowledge consistency in representative MLLMs. The dataset and appendix are available at https://github.com/zgg2577/VS-KC.
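To illustrate the kind of rule-violation scenario generation described above, the sketch below composes text-to-image prompts by pairing violation entities with the safety rules they break and with camera viewpoints. The entity list, rule wording, viewpoint names, and template are illustrative assumptions for this sketch, not the paper's actual configuration; the resulting prompts would then be fed to a diffusion model.

```python
from itertools import product

# Hypothetical prompt-construction step for diffusion-based synthesis of
# rule-violating OR scenes. All entities, rules, and viewpoints below are
# illustrative placeholders, not the OR-VSKC configuration.
VIOLATION_ENTITIES = {  # entity -> safety rule it violates
    "uncapped needle": "sharps must be capped or placed in a sharps container",
    "bare hands": "sterile gloves are required in the sterile field",
    "food item": "food and drink are prohibited in the operating room",
}
VIEWPOINTS = ["overhead view", "side view from the anesthesia station"]

def build_prompts(entities=VIOLATION_ENTITIES, viewpoints=VIEWPOINTS):
    """Compose one text-to-image prompt per (entity, viewpoint) pair."""
    prompts = []
    for (entity, rule), view in product(entities.items(), viewpoints):
        prompts.append(
            f"photorealistic operating room scene, {view}, "
            f"containing a {entity} (violating the rule: {rule})"
        )
    return prompts

prompts = build_prompts()
print(len(prompts))  # 3 entities x 2 viewpoints = 6 prompts
```

Enumerating the full entity-by-viewpoint grid in this way is one plausible route to the dataset's stated coverage of diverse perspectives and configurations, and it makes the cross-entity coverage gap discussed above easy to probe by holding out entities at generation time.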