Mutual Information guided Visual Contrastive Learning

📅 2025-10-26
🤖 AI Summary
Existing contrastive learning approaches rely on hand-crafted data augmentations (e.g., color jittering) guided by human priors, which often misalign with real-world data distributions and hinder generalization in open-world settings. To address this, we propose a novel data-pairing paradigm grounded in mutual information estimation over authentic data distributions: natural image patches exhibiting high mutual information under realistic perturbations are selected as positive pairs, replacing heuristic augmentation policies. This work is the first to systematically integrate mutual information as a principled criterion into both data selection and augmentation stages of contrastive learning, enabling distribution-driven positive pair construction within the InfoNCE framework. We validate our method across mainstream architectures—including ResNet and ViT—and multiple benchmarks (ImageNet, CIFAR, DomainNet). Results demonstrate consistent improvements in downstream task performance and cross-domain generalization, offering an interpretable, data-driven pathway for self-supervised representation learning.
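The summary above frames the method as distribution-driven positive pair construction inside the InfoNCE framework. As a point of reference, the InfoNCE objective itself can be sketched as a cross-entropy over a similarity matrix whose diagonal holds the matching (positive) pairs; this is a minimal NumPy illustration, not the paper's implementation, and the function name and temperature value are hypothetical:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss over a batch: row i of `positives` is the positive
    for row i of `anchors`; all other rows serve as negatives."""
    # L2-normalize embeddings so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy against the diagonal (matching-pair) targets
    return float(-np.mean(np.diag(log_prob)))
```

The paper's contribution concerns how the positive rows are chosen (by mutual information under real-world perturbations) rather than the loss itself, which is shared with prior contrastive frameworks.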

📝 Abstract
Representation learning methods utilizing the InfoNCE loss have demonstrated considerable capacity in reducing human annotation effort by training invariant neural feature extractors. Although different variants of the training objective adhere to the information maximization principle between the data and learned features, data selection and augmentation still rely on human hypotheses or engineering, which may be suboptimal. For instance, data augmentation in contrastive learning primarily focuses on color jittering, aiming to emulate real-world illumination changes. In this work, we investigate the potential of selecting training data based on their mutual information computed from real-world distributions, which, in principle, should endow the learned features with better generalization when applied in open environments. Specifically, we consider patches attached to scenes that exhibit high mutual information under natural perturbations, such as color changes and motion, as positive samples for learning with contrastive loss. We evaluate the proposed mutual-information-informed data augmentation method on several benchmarks across multiple state-of-the-art representation learning frameworks, demonstrating its effectiveness and establishing it as a promising direction for future research.
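The abstract describes treating patches that retain high mutual information under natural perturbations as positive samples. A simple plug-in estimator makes the idea concrete: estimate MI from a joint intensity histogram and keep (patch, perturbed patch) pairs above a threshold. This is a hedged sketch under assumed details (histogram binning, the `mi_threshold` value, and both function names are illustrative, not from the paper):

```python
import numpy as np

def histogram_mi(x, y, bins=16):
    """Plug-in mutual information estimate between two patches,
    computed from a joint histogram of their flattened intensities."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def select_positive_pairs(patches, perturb, mi_threshold=0.5, bins=16):
    """Keep (patch, perturbed patch) pairs whose estimated MI under a
    natural perturbation exceeds the threshold; these become positives."""
    pairs = []
    for p in patches:
        q = perturb(p)
        if histogram_mi(p, q, bins=bins) >= mi_threshold:
            pairs.append((p, q))
    return pairs
```

In practice a neural MI estimator or larger sample sizes would replace the histogram estimator, which is biased for small patches; the sketch only shows where MI enters the data-selection stage.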
Problem

Research questions and friction points this paper is trying to address.

Selecting training data using mutual information from real-world distributions
Improving generalization of learned features in open environments
Enhancing contrastive learning with natural perturbation-based positive samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mutual information guides contrastive learning data selection
Patches with high mutual information under natural perturbations are used as positive samples
Method evaluated across multiple representation learning frameworks