🤖 AI Summary
To address high false-positive rates, missed detection of minute defects, and poor cross-domain generalization in surgical instrument defect detection, this paper proposes an unsupervised visual anomaly detection framework. Methodologically, it innovatively integrates background mask-guided suppression of texture interference, patch-level fine-grained analysis to enhance sensitivity to micro-defects, and a lightweight domain adaptation mechanism to mitigate domain shift caused by inter-instrument appearance variations. The framework requires no annotated defect samples, instead leveraging unsupervised segmentation and local feature modeling to accurately encode structural priors of surgical instruments. Evaluated on a real-world surgical instrument dataset, the method reduces false-positive rate by 32.7%, improves recall for small defects by 19.4%, and demonstrates strong generalization across diverse clinical scenarios. This work provides an efficient, robust, and fully automated solution for quality inspection of medical devices.
📝 Abstract
Ensuring the safety of surgical instruments requires reliable detection of visual defects. However, manual inspection is prone to error, and existing automated defect detection methods, typically trained on natural/industrial images, fail to transfer effectively to the surgical domain. We demonstrate that simply applying or fine-tuning these approaches leads to issues: false positive detections arising from textured backgrounds, poor sensitivity to small, subtle defects, and inadequate capture of instrument-specific features due to domain shift. To address these challenges, we propose a versatile method that adapts unsupervised defect detection methods specifically for surgical instruments. By integrating background masking, a patch-based analysis strategy, and efficient domain adaptation, our method overcomes these limitations, enabling the reliable detection of fine-grained defects in surgical instrument imagery.