CLIP-RL: Surgical Scene Segmentation Using Contrastive Language-Vision Pretraining & Reinforcement Learning

📅 2025-07-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Robust instrument and tissue segmentation in minimally invasive surgery (MIS) videos remains challenging due to severe occlusions, texture variations, and dynamic illumination changes. Method: This paper proposes an end-to-end semantic segmentation framework that synergistically integrates Contrastive Language–Image Pretraining (CLIP) with reinforcement learning (RL). CLIP serves as a multimodal feature encoder, while a policy-network-driven RL module—augmented by curriculum learning—dynamically optimizes mask generation to adapt to complex optical conditions. Contribution/Results: To our knowledge, this is the first work to jointly leverage CLIP’s open-vocabulary semantic understanding and RL’s sequential decision-making capability for MIS segmentation. Evaluated on the EndoVis 2018 and 2017 datasets, our method achieves mean Intersection-over-Union (mIoU) scores of 81.0% and 74.12%, respectively—substantially outperforming existing state-of-the-art approaches.
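The pipeline described above (a frozen CLIP encoder feeding a mask decoder, with a policy network driving refinement) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the Hugging Face checkpoint `openai/clip-vit-base-patch32`, the class names `ClipSegmenter` and `RefinementPolicy`, and the lightweight decoder and action heads are placeholders, not the authors' implementation.

```python
# Hypothetical sketch of a CLIP-RL-style pipeline (not the authors' code).
# A frozen CLIP vision encoder provides patch features; a small decoder predicts
# an initial mask; a policy network proposes per-pixel refinement actions.
import torch
import torch.nn as nn
from transformers import CLIPVisionModel

class ClipSegmenter(nn.Module):
    def __init__(self, num_classes: int = 12, clip_name: str = "openai/clip-vit-base-patch32"):
        super().__init__()
        self.clip = CLIPVisionModel.from_pretrained(clip_name)
        self.clip.requires_grad_(False)           # keep CLIP frozen as a feature extractor
        hidden = self.clip.config.hidden_size     # 768 for ViT-B/32
        self.decoder = nn.Sequential(             # lightweight mask decoder (placeholder)
            nn.Conv2d(hidden, 256, 3, padding=1), nn.ReLU(),
            nn.Conv2d(256, num_classes, 1),
        )

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        out = self.clip(pixel_values=pixel_values).last_hidden_state  # (B, 1+P, C)
        patches = out[:, 1:, :]                                       # drop the CLS token
        b, p, c = patches.shape
        side = int(p ** 0.5)                                          # square patch grid
        feat = patches.transpose(1, 2).reshape(b, c, side, side)
        logits = self.decoder(feat)                                   # coarse class logits
        return nn.functional.interpolate(                             # upsample to input size
            logits, size=pixel_values.shape[-2:], mode="bilinear", align_corners=False
        )

class RefinementPolicy(nn.Module):
    """Policy head that scores discrete refinement actions (e.g. keep / grow / shrink)
    over the current mask logits; purely illustrative."""
    def __init__(self, num_classes: int = 12, num_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_actions, 1),
        )

    def forward(self, mask_logits: torch.Tensor) -> torch.Tensor:
        return self.net(mask_logits)   # (B, num_actions, H, W) action logits
```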

📝 Abstract
Understanding surgical scenes can improve the quality of care patients receive, especially given the vast amount of video data generated during MIS; processing these videos yields valuable assets for training sophisticated models. In this paper, we introduce CLIP-RL, a novel contrastive language-image pre-training model tailored for semantic segmentation of surgical scenes. CLIP-RL presents a new segmentation approach that combines reinforcement learning and curriculum learning, enabling continuous refinement of the segmentation masks throughout the training pipeline. Our model shows robust performance under optical conditions that pose significant challenges, such as occlusions, texture variations, and dynamic lighting. The CLIP model serves as a powerful feature extractor, capturing rich semantic context that sharpens the distinction between instruments and tissues. The RL module plays a pivotal role in dynamically refining predictions through iterative action-space adjustments. We evaluated CLIP-RL on the EndoVis 2018 and EndoVis 2017 datasets, where it achieved mean IoU scores of 81% and 74.12%, respectively, outperforming state-of-the-art models. This performance stems from the combination of contrastive learning with reinforcement learning and curriculum learning.
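The abstract's description of the RL module (iterative action-space adjustments that refine the predicted masks, trained with a curriculum) could translate into a policy-gradient step along the following lines. This is a hedged sketch under strong assumptions: a binary instrument-vs-background mask, three toy actions (keep / raise / lower the foreground logit), IoU gain as the reward, and a linear curriculum schedule; the paper's actual action space, reward, and curriculum are not specified in this summary.

```python
# Illustrative REINFORCE-style refinement step and curriculum schedule.
# `policy` is any network mapping (B, 1, H, W) mask logits to (B, 3, H, W)
# action logits, e.g. the RefinementPolicy sketched earlier with num_classes=1.
import torch

def iou_reward(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """IoU between a binarized prediction and the ground-truth mask, per sample."""
    inter = (pred * target).sum(dim=(-2, -1))
    union = ((pred + target) > 0).float().sum(dim=(-2, -1))
    return (inter + eps) / (union + eps)

def refine_step(policy, mask_logits, target, optimizer):
    """Sample per-pixel actions, apply them to the mask, and reinforce actions
    in proportion to the IoU improvement they produce (REINFORCE objective)."""
    action_logits = policy(mask_logits)                                   # (B, 3, H, W)
    dist = torch.distributions.Categorical(logits=action_logits.permute(0, 2, 3, 1))
    actions = dist.sample()                                               # (B, H, W)

    # Toy action semantics: 0 = keep, 1 = raise the foreground logit, 2 = lower it.
    delta = (actions == 1).float() - (actions == 2).float()
    refined = mask_logits + delta.unsqueeze(1)

    before = iou_reward((mask_logits.sigmoid() > 0.5).float(), target)
    after = iou_reward((refined.sigmoid() > 0.5).float(), target)
    advantage = (after - before).view(-1, 1, 1)                           # reward = IoU gain

    loss = -(dist.log_prob(actions) * advantage).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return refined.detach(), after.mean().item()

def curriculum_weight(epoch: int, total_epochs: int) -> float:
    """Placeholder linear curriculum: start with easy (clear, unoccluded) frames
    and gradually admit harder ones as training progresses."""
    return min(1.0, (epoch + 1) / (0.5 * total_epochs))
```

In a full pipeline, `refine_step` would be applied for a few iterations per frame and `curriculum_weight` would control how aggressively hard frames (heavy occlusion, specular lighting) are sampled; both are placeholder choices here.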
Problem

Research questions and friction points this paper is trying to address.

Segment surgical scenes using contrastive language-vision pretraining
Refine segmentation masks via reinforcement and curriculum learning
Handle challenges like occlusions and dynamic lighting in MIS
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines contrastive learning with reinforcement learning
Uses curriculum learning for continuous mask refinement
Leverages CLIP for semantic feature extraction (see the sketch after this list)
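The open-vocabulary side of CLIP referred to above can be illustrated with a short, hedged sketch: text prompts describing instrument and tissue classes are scored against a frame using the standard Hugging Face CLIP API. The prompts and the checkpoint are invented placeholders; the paper's actual prompt design, and how text embeddings are fused with patch-level features for per-pixel prediction, are not given in this summary.

```python
# Illustrative use of CLIP's open-vocabulary text prompts to score surgical classes.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

class_prompts = [            # hypothetical open-vocabulary class descriptions
    "a surgical grasping instrument",
    "a surgical scissor blade",
    "kidney tissue in a laparoscopic view",
    "background anatomy",
]

def score_classes(frame: Image.Image) -> torch.Tensor:
    """Return a softmax distribution over the text prompts for one video frame.
    A segmentation head would instead fuse these text embeddings with CLIP's
    patch-level features to produce per-pixel class scores."""
    inputs = processor(text=class_prompts, images=frame,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image: (num_images, num_prompts) scaled cosine similarities
    return outputs.logits_per_image.softmax(dim=-1).squeeze(0)
```

The frame-level call only demonstrates the open-vocabulary mechanism; per-pixel segmentation requires grounding the same text embeddings in spatial features, as in the decoder sketched earlier.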
🔎 Similar Papers
2024-05-16 | International Conference on Medical Image Computing and Computer-Assisted Intervention | Citations: 6
Authors: Fatmaelzahraa Ali Ahmed (Department of Surgery, Hamad Medical Corporation, Doha, Qatar), Muhammad Arsalan (Department of Computer Science and Engineering, Qatar University, Doha, Qatar), Abdulaziz Al-Ali (Qatar University), Khalid Al-Jalham (Department of Surgery, Hamad Medical Corporation, Doha, Qatar), Shidin Balakrishnan (Department of Surgery, Hamad Medical Corporation, Doha, Qatar)
Topics: Machine Learning, Artificial Neural Networks, Applied Artificial Intelligence