A filtering scheme for confocal laser endomicroscopy (CLE)-video sequences for self-supervised learning

📅 2025-10-30
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
High inter-frame correlation in confocal laser endomicroscopy (CLE) videos leads to skewed data distributions and inefficient training in self-supervised learning. Method: We propose the first pre-processing filter designed specifically for self-supervised learning on CLE videos, explicitly reducing inter-frame redundancy to improve sample diversity and distribution balance. Our approach employs a lightweight Vision Transformer-based teacher-student framework. Validation is conducted on two clinical datasets: sinonasal tumors and cutaneous squamous cell carcinoma. Results: After filtering, model accuracy reaches 67.48% and 73.52%, respectively, considerably outperforming non-self-supervised baselines. Training time is reduced by 67%, convergence accelerates markedly, and generalization improves. This work establishes an efficient, transferable data pre-processing paradigm for self-supervised learning on CLE video data.

📝 Abstract
Confocal laser endomicroscopy (CLE) is a non-invasive, real-time imaging modality that can be used for in-situ, in-vivo imaging and the microstructural analysis of mucosal structures. Diagnosis using CLE is, however, complicated by the images being hard to interpret for inexperienced physicians. Utilizing machine learning as an augmentative tool would hence be beneficial, but is complicated by the shortage of histopathology-correlated CLE imaging sequences relative to the plurality of patterns in this domain, leading to overfitting of machine learning models. To overcome this, self-supervised learning (SSL) can be employed on larger unlabeled datasets. CLE is a video-based modality with high inter-frame correlation, leading to a non-stratified data distribution for SSL training. In this work, we propose a filter for CLE video sequences to reduce dataset redundancy in SSL training and improve SSL training convergence and training efficiency. We use four state-of-the-art baseline networks and an SSL teacher-student network with a Vision Transformer Small (ViT-S) backbone for the evaluation. These networks were evaluated on downstream tasks for a sinonasal tumor dataset and a squamous cell carcinoma of the skin dataset. On both datasets, we found the highest test accuracy on the filtered SSL-pretrained model, with 67.48% and 73.52%, both considerably outperforming their non-SSL baselines. Our results show that SSL is an effective method for CLE pretraining. Further, we show that our proposed CLE video filter can be utilized to improve training efficiency in self-supervised scenarios, resulting in a reduction of 67% in training time.
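The abstract does not specify the exact filter criterion. A minimal sketch of one plausible redundancy filter, assuming a cosine-similarity test between each frame and the last kept frame (the function name and threshold are illustrative, not from the paper):

```python
import numpy as np

def filter_redundant_frames(frames, threshold=0.95):
    """Greedily keep a frame only if its cosine similarity to the
    most recently kept frame falls below `threshold`.

    `frames` is a list of 2-D grayscale arrays; returns the indices
    of the frames that survive the filter. This is a hypothetical
    stand-in for the paper's filter, not its actual rule.
    """
    kept = [0]  # always keep the first frame as the reference
    ref = frames[0].ravel().astype(np.float64)
    ref /= np.linalg.norm(ref) + 1e-12
    for i in range(1, len(frames)):
        v = frames[i].ravel().astype(np.float64)
        v /= np.linalg.norm(v) + 1e-12
        if float(ref @ v) < threshold:  # sufficiently dissimilar
            kept.append(i)
            ref = v  # new reference for subsequent comparisons
    return kept
```

In practice a perceptual measure such as SSIM, or the distance between learned frame embeddings, could replace raw-pixel cosine similarity; the greedy keep-or-drop structure stays the same.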
Problem

Research questions and friction points this paper is trying to address.

Reducing CLE video redundancy for self-supervised learning efficiency
Overcoming data shortage and overfitting in CLE medical imaging
Improving training convergence for CLE diagnosis using filtered sequences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Filtering scheme reduces redundancy in CLE videos
Self-supervised learning with teacher-student network
Vision transformer backbone improves training efficiency
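The teacher-student SSL setup named above can be sketched in a DINO-style form: the teacher's weights track the student via an exponential moving average, and the student is trained to match the teacher's sharpened output distribution. The temperatures and momentum below are common defaults, assumed here rather than taken from the paper:

```python
import numpy as np

def softmax(logits, temp):
    # Temperature-scaled softmax; lower temp sharpens the distribution.
    z = logits / temp
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, t_s=0.1, t_t=0.04):
    # Cross-entropy between the (sharpened, constant) teacher targets
    # and the student's prediction over the same projection dimensions.
    p_t = softmax(teacher_logits, t_t)
    log_p_s = np.log(softmax(student_logits, t_s) + 1e-12)
    return float(-(p_t * log_p_s).sum())

def ema_update(teacher_w, student_w, momentum=0.996):
    # Teacher parameters are never trained by gradient descent; they
    # follow the student through an exponential moving average.
    return momentum * teacher_w + (1 - momentum) * student_w
```

Only the student backpropagates; after each step the teacher is refreshed with `ema_update`, which is what makes filtering for diverse frames matter: redundant views give the student near-identical targets and slow convergence.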
Nils Porsche
Flensburg University of Applied Sciences, Flensburg, Germany
Flurin Müller-Diesing
University Hospital RWTH Aachen, Department of Otorhinolaryngology, Aachen, Germany
Sweta Banerjee
Research Assistant - Flensburg University of Applied Sciences
self-supervised learning, domain adaptation, multi-modal approaches in histopathology
Miguel Goncalves
Department of Otorhinolaryngology, Plastic and Aesthetic Operations, University Hospital Würzburg, Würzburg, Germany
Marc Aubreville
Professor at Flensburg University of Applied Sciences, Flensburg, Germany
Computer Vision, Deep Learning, Signal Processing