Understanding Contrastive Representation Learning from Positive Unlabeled (PU) Data

📅 2024-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses representation learning under positive-unlabeled (PU) learning, where only a small number of positive examples and a large unlabeled dataset, containing both positive and negative instances, are available. We propose a framework comprising: (i) puCL, an unbiased, variance-reduced contrastive loss; (ii) puNCE, a prior-aware reweighting mechanism; and (iii) a PU-aware pseudo-label clustering algorithm. Theoretically, we provide rigorous bias-variance analysis, convergence guarantees, and generalization bounds. Empirically, puCL achieves state-of-the-art performance across standard PU benchmarks, notably improving classification accuracy under extremely low supervision (e.g., with only 10 labeled positives), demonstrating both the efficacy of theory-guided design and strong generalization capability.

📝 Abstract
Pretext Invariant Representation Learning (PIRL) followed by Supervised Fine-Tuning (SFT) has become a standard paradigm for learning with limited labels. We extend this approach to the Positive Unlabeled (PU) setting, where only a small set of labeled positives and a large unlabeled pool, containing both positives and negatives, are available. We study this problem under two regimes: (i) without access to the class prior, and (ii) when the prior is known or can be estimated. We introduce Positive Unlabeled Contrastive Learning (puCL), an unbiased and variance-reducing contrastive objective that judiciously integrates weak supervision from labeled positives into the contrastive loss. When the class prior is known, we propose Positive Unlabeled InfoNCE (puNCE), a prior-aware extension that re-weights unlabeled samples as soft positive-negative mixtures. For downstream classification, we develop a pseudo-labeling algorithm that leverages the structure of the learned embedding space via PU-aware clustering. Our framework is supported by theory, offering bias-variance analysis, convergence insights, and generalization guarantees via augmentation concentration, and is validated empirically across standard PU benchmarks, where it consistently outperforms existing methods, particularly in low-supervision regimes.
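The soft positive-negative reweighting described in the abstract can be sketched in PyTorch as follows. This is a minimal illustration under stated assumptions, not the paper's exact objective: labeled positive anchors attract all other labeled positives, while each unlabeled anchor mixes a supervised attraction term (weight equal to the class prior π) with the standard augmentation-pair term (weight 1 − π). The function name, signature, and weighting details are assumptions based on the abstract's description.

```python
import torch
import torch.nn.functional as F

def punce_loss(z1, z2, is_pos, prior, tau=0.5):
    """Hedged sketch of a prior-aware InfoNCE (puNCE-style) loss.

    z1, z2 : (N, d) L2-normalised embeddings of two augmented views.
    is_pos : (N,) bool mask, True where the sample is a labeled positive.
    prior  : assumed class prior pi = P(y = +1) for the unlabeled pool.
    """
    z = torch.cat([z1, z2])                      # (2N, d) both views stacked
    n = z1.size(0)
    sim = z @ z.t() / tau                        # temperature-scaled cosine sims
    mask = ~torch.eye(2 * n, dtype=torch.bool)   # drop self-similarity
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(~mask, float('-inf')), dim=1, keepdim=True)

    pos_mask = torch.cat([is_pos, is_pos])       # mask over both views
    idx = torch.arange(2 * n)
    pair = (idx + n) % (2 * n)                   # index of each anchor's other view

    loss = 0.0
    for i in range(2 * n):
        if pos_mask[i]:
            # labeled positive: attract every other labeled positive
            targets = pos_mask.clone()
            targets[i] = False
            loss += -log_prob[i, targets].mean()
        else:
            # unlabeled: mix supervised term (weight pi) with the
            # self-supervised augmentation-pair term (weight 1 - pi)
            aug = -log_prob[i, pair[i]]
            sup = -log_prob[i, pos_mask].mean()
            loss += prior * sup + (1 - prior) * aug
    return loss / (2 * n)
```

Setting `prior=0` recovers a plain self-supervised pairing for unlabeled anchors, which matches the intuition that puCL should fall back to unsupervised contrastive learning when no prior information is used.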
Problem

Research questions and friction points this paper is trying to address.

Extends contrastive learning to the Positive Unlabeled (PU) setting
Develops unbiased puCL and prior-aware puNCE contrastive objectives
Proposes PU-aware clustering for downstream classification tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

puCL integrates weak supervision into contrastive loss
puNCE re-weights unlabeled samples using class prior
PU-aware clustering improves pseudo-labeling for classification
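The PU-aware pseudo-labeling idea can be illustrated with a toy two-means procedure on the learned embeddings: cluster the data into two groups and declare positive the cluster that captures the majority of the labeled positives. This is a hypothetical simplification of the paper's algorithm; the centroid seeding, function name, and parameters are assumptions for illustration only.

```python
import numpy as np

def pu_pseudo_labels(emb, pos_idx, n_iter=50):
    """Toy PU-aware 2-means pseudo-labeler (illustrative sketch).

    emb     : (N, d) array of learned embeddings.
    pos_idx : indices of the labeled positives.
    Returns 0/1 pseudo-labels; 1 marks the cluster holding most
    labeled positives.
    """
    # seed one centroid at the labeled-positive mean, the other at the
    # embedding farthest from it (an assumed, deterministic init)
    c0 = emb[pos_idx].mean(axis=0)
    far = ((emb - c0) ** 2).sum(axis=1).argmax()
    centers = np.stack([c0, emb[far]])

    for _ in range(n_iter):
        # assign each point to its nearest centroid
        d = ((emb[:, None, :] - centers[None]) ** 2).sum(axis=-1)
        assign = d.argmin(axis=1)
        # recompute centroids from current assignments
        for k in range(2):
            if (assign == k).any():
                centers[k] = emb[assign == k].mean(axis=0)

    # the cluster containing the majority of labeled positives is "positive"
    pos_cluster = np.bincount(assign[pos_idx], minlength=2).argmax()
    return (assign == pos_cluster).astype(int)
```

On well-separated embeddings this recovers the positive group from only a handful of labeled positives, which is the low-supervision regime the paper targets.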