🤖 AI Summary
This work addresses the challenge that arbitrary data augmentation in medical imaging may disrupt disease-relevant biomarkers, thereby degrading contrastive learning performance. To mitigate this issue, the authors propose an unsupervised method for generating disease severity labels based on anomaly detection and gradient responses, which then guides supervised contrastive learning. Notably, the approach uses these gradient responses to construct fine-grained positive and negative sample pairs from unlabeled OCT images, preserving semantic information in pathological regions. Evaluated on a diabetic retinopathy biomarker classification task, the proposed method achieves up to a 6% improvement in accuracy over self-supervised baselines, enhancing both the discriminability of learned representations and their clinical interpretability.
📝 Abstract
In this paper, we propose a novel pair-selection strategy for contrastive learning on medical images. On natural images, contrastive learning uses augmentations to generate positive and negative pairs for the contrastive loss. However, in the medical domain, arbitrary augmentations can distort the small, localized regions that contain the biomarkers we aim to detect. A more intuitive approach is to select samples with similar disease severity characteristics, since such samples are more likely to share structures related to the progression of a disease. To enable this, we introduce a method that generates disease severity labels for unlabeled OCT scans from the gradient responses of an anomaly detection algorithm. These labels are then used in a supervised contrastive learning setup, improving biomarker classification accuracy by as much as 6% over self-supervised baselines for key indicators of Diabetic Retinopathy.
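The abstract does not include code; as an illustration only, here is a minimal NumPy sketch of the supervised contrastive (SupCon) objective that pseudo severity labels could drive, where samples sharing a label are treated as positives for each other. The function name, shapes, and temperature value are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss over L2-normalized embeddings.

    `labels` holds pseudo disease-severity labels (hypothetical here):
    samples sharing a label act as positives, all others as negatives.
    """
    labels = np.asarray(labels)
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    n = z.shape[0]
    eye = np.eye(n, dtype=bool)
    # Exclude each anchor from its own softmax denominator via a -inf logit.
    logits = np.where(eye, -np.inf, z @ z.T / temperature)
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~eye    # same-severity pairs
    has_pos = pos.sum(axis=1) > 0                        # skip singleton labels
    per_anchor = (np.where(pos, log_prob, 0.0).sum(axis=1)[has_pos]
                  / pos.sum(axis=1)[has_pos])
    return float(-per_anchor.mean())
```

As a sanity check, embeddings that cluster by severity label should yield a lower loss than random embeddings, which is the behavior the proposed selection strategy relies on.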