Self-Supervised Contrastive Embedding Adaptation for Endoscopic Image Matching

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pixel-level correspondence in minimally invasive surgical endoscopy is hindered by weak perspective cues, non-Lambertian reflectance, and large tissue deformations. Method: This paper proposes a self-supervised contrastive embedding adaptation framework. It introduces (1) a self-supervised ground-truth generation mechanism based on novel-view synthesis, eliminating reliance on manual annotations or physical models; (2) a domain-adaptation architecture coupling the pre-trained DINOv2 model with a lightweight Transformer for fine-grained feature learning tailored to endoscopic visual characteristics; and (3) a cosine similarity thresholding strategy to enhance correspondence robustness. Results: Evaluated on the SCARED benchmark, the method significantly reduces epipolar error and achieves superior matching accuracy over state-of-the-art approaches, establishing a reliable foundation for intraoperative real-time navigation and 3D reconstruction.
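The cosine similarity thresholding strategy mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mutual-nearest-neighbour check, the function name `match_embeddings`, and the threshold value are assumptions.

```python
import numpy as np

def match_embeddings(desc_a, desc_b, threshold=0.9):
    """Mutual nearest-neighbour matching with cosine-similarity thresholding.

    desc_a: (N, D) descriptors from frame A; desc_b: (M, D) from frame B.
    Returns (i, j) index pairs whose cosine similarity exceeds the threshold.
    NOTE: illustrative sketch; the threshold value is an assumption.
    """
    # L2-normalise so that dot products equal cosine similarities
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T                      # (N, M) cosine similarity matrix
    best_ab = sim.argmax(axis=1)       # best match in B for each A-descriptor
    best_ba = sim.argmax(axis=0)       # best match in A for each B-descriptor
    # Keep only mutual best matches that clear the similarity threshold
    return [(i, j) for i, j in enumerate(best_ab)
            if best_ba[j] == i and sim[i, j] >= threshold]
```

The thresholding rejects ambiguous pairs whose embeddings are only weakly similar, which is one way such a strategy can improve robustness on low-texture tissue.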

📝 Abstract
Accurate spatial understanding is essential for image-guided surgery, augmented reality integration, and context awareness. In minimally invasive procedures, where visual input is the sole intraoperative modality, establishing precise pixel-level correspondences between endoscopic frames is critical for 3D reconstruction, camera tracking, and scene interpretation. However, the surgical domain presents distinct challenges: weak perspective cues, non-Lambertian tissue reflections, and complex, deformable anatomy degrade the performance of conventional computer vision techniques. While Deep Learning models have shown strong performance in natural scenes, their features are not inherently suited for fine-grained matching in surgical images and require targeted adaptation to meet the demands of this domain. This research presents a novel Deep Learning pipeline for establishing feature correspondences in endoscopic image pairs, alongside a self-supervised optimization framework for model training. The proposed methodology leverages a novel-view synthesis pipeline to generate ground-truth inlier correspondences, which are subsequently used to mine triplets within a contrastive learning paradigm. Through this self-supervised approach, we augment the DINOv2 backbone with an additional Transformer layer, specifically optimized to produce embeddings that enable direct matching through cosine similarity thresholding. Experimental evaluation demonstrates that our pipeline surpasses state-of-the-art methodologies on the SCARED dataset, achieving improved matching precision and lower epipolar error compared to related work. The proposed framework constitutes a valuable contribution toward enabling more accurate high-level computer vision applications in surgical endoscopy.
Problem

Research questions and friction points this paper is trying to address.

Develops self-supervised contrastive learning for endoscopic image matching
Addresses challenges like tissue reflections and deformable anatomy in surgery
Enables accurate 3D reconstruction and camera tracking in endoscopy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised contrastive learning for feature adaptation
Novel-view synthesis generates ground-truth correspondences
Augments DINOv2 with Transformer for direct cosine matching
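The contrastive adaptation described in the bullets above pairs mined triplets with a margin-based objective. A minimal sketch of such a triplet loss over cosine distance follows; the function name, the cosine-distance formulation, and the margin value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss over cosine distance, as used in contrastive
    embedding adaptation.

    Anchor/positive pairs would come from synthesised ground-truth inlier
    correspondences; negatives are mined from non-matching pixels.
    NOTE: illustrative sketch; the margin value is an assumption.
    """
    def cos(u, v):
        return (u * v).sum(-1) / (
            np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))

    d_pos = 1.0 - cos(anchor, positive)   # small when matched embeddings agree
    d_neg = 1.0 - cos(anchor, negative)   # large for well-separated negatives
    # Penalise triplets where the negative is not at least `margin` farther
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

Minimising this objective pulls embeddings of corresponding pixels together and pushes non-matching ones apart, which is what makes the later cosine-similarity thresholding discriminative.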
Alberto Rota
Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
Elena De Momi
Politecnico di Milano
medical robotics, computer vision, artificial intelligence, human robot interaction