🤖 AI Summary
In self-supervised contrastive learning, global random sampling often misclassifies semantically similar samples as negative pairs (false negatives), leading to erroneous separation in the embedding space. To address this, we propose the first online dynamic false-negative identification mechanism operating over the entire dataset, overcoming the limitations of conventional intra-batch local constraints. Our approach enables real-time, global false-negative discovery with computational overhead independent of dataset size. Methodologically, it integrates embedding-space similarity modeling, optimization-based adaptive threshold learning, and online gradient updates. Extensive experiments demonstrate substantial improvements in representation quality across image and vision-language multimodal tasks: ResNet-50 achieves a +1.8% top-1 accuracy gain under linear evaluation on ImageNet-1K. The implementation is publicly available.
📝 Abstract
In self-supervised contrastive learning, negative pairs are typically constructed using an anchor image and a sample drawn from the entire dataset, excluding the anchor. However, this approach can create negative pairs with similar semantics, referred to as "false negatives", whose embeddings are then erroneously pushed apart. To address this issue, we introduce GloFND, an optimization-based approach that automatically learns on the fly a per-anchor threshold for identifying that anchor's false negatives during training. In contrast to previous methods for false negative discovery, our approach detects false negatives globally across the entire dataset rather than locally within the mini-batch. Moreover, its per-iteration computation cost remains independent of the dataset size. Experimental results on image and image-text data demonstrate the effectiveness of the proposed method. Our implementation is available at https://github.com/vibalcam/GloFND.
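To make the core idea concrete, here is a minimal sketch (not the paper's actual algorithm) of learning a per-anchor threshold online. The assumption, consistent with the abstract, is that each anchor's threshold should converge to a similarity quantile so that a target fraction `alpha` of the dataset is flagged as false negatives; the quantile is learned by stochastic gradient steps on a pinball-style loss, so each update only touches the similarities sampled in the current batch. All names (`update_thresholds`, `false_negative_mask`, `alpha`, `lr`) are illustrative, not from the GloFND codebase.

```python
import numpy as np

def update_thresholds(thresholds, anchor_ids, sims, alpha, lr):
    """One online step: nudge each anchor's threshold so that, at the
    fixed point, a fraction `alpha` of sampled similarities exceed it.

    thresholds : array of per-anchor thresholds, indexed by anchor id
    anchor_ids : anchors appearing in this batch
    sims       : list of similarity arrays, one per anchor, against
                 sampled candidate negatives
    The update `frac_above - alpha` is the (sub)gradient of a quantile
    (pinball) loss, so the threshold drifts toward the (1-alpha)-quantile
    of that anchor's similarity distribution.
    """
    for i, s in zip(anchor_ids, sims):
        frac_above = (s > thresholds[i]).mean()
        thresholds[i] += lr * (frac_above - alpha)
    return thresholds

def false_negative_mask(thresholds, anchor_ids, sims):
    """Flag candidate negatives whose similarity exceeds the anchor's
    learned threshold; these would be excluded (or down-weighted) in the
    contrastive loss rather than pushed apart."""
    return [s > thresholds[i] for i, s in zip(anchor_ids, sims)]

# Toy usage: one anchor, similarities uniform on [0, 1], alpha = 0.1.
# The threshold should approach the 0.9 quantile (~0.9).
rng_sims = np.linspace(0.0, 1.0, 100)
thresholds = np.zeros(1)
for _ in range(300):
    thresholds = update_thresholds(thresholds, [0], [rng_sims], alpha=0.1, lr=0.5)
mask = false_negative_mask(thresholds, [0], [rng_sims])[0]
```

Because each step uses only the current batch's similarities, the per-iteration cost depends on the batch size, not the dataset size, which matches the complexity claim in the abstract.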