🤖 AI Summary
In positive-unlabeled (PU) learning, unreliable supervision impedes discriminative representation learning. To address this, we propose NcPU, a non-contrastive framework that requires neither auxiliary negative samples nor pre-estimated prior parameters. Its core components are a noisy-pair robust supervised non-contrastive loss (NoiSNCL) and a phantom label disambiguation (PLD) scheme that performs regret-based label updates; the two can be viewed as alternating steps of an Expectation-Maximization (EM) procedure, enabling stable representation alignment without auxiliary information. On challenging benchmarks such as CIFAR-100, where prior PU methods trail their supervised counterparts by 14.26%, NcPU significantly outperforms existing PU methods. It also demonstrates practical utility on a real-world post-disaster building damage assessment task.
📝 Abstract
Positive-Unlabeled (PU) learning aims to train a binary classifier (positive vs. negative) where only limited positive data and abundant unlabeled data are available. While widely applicable, state-of-the-art PU learning methods substantially underperform their supervised counterparts on complex datasets, especially without auxiliary negatives or pre-estimated parameters (e.g., a 14.26% gap on the CIFAR-100 dataset). We identify the primary bottleneck as the challenge of learning discriminative representations under unreliable supervision. To tackle this challenge, we propose NcPU, a non-contrastive PU learning framework that requires no auxiliary information. NcPU combines a noisy-pair robust supervised non-contrastive loss (NoiSNCL), which aligns intra-class representations despite unreliable supervision, with a phantom label disambiguation (PLD) scheme that supplies conservative negative supervision via regret-based label updates. Theoretically, NoiSNCL and PLD iteratively benefit each other when viewed through the lens of the Expectation-Maximization framework. Empirically, extensive experiments demonstrate that: (1) NoiSNCL enables simple PU methods to achieve competitive performance; and (2) NcPU achieves substantial improvements over state-of-the-art PU methods across diverse datasets, including challenging datasets for post-disaster building damage mapping, highlighting its promise for real-world applications. Code will be open-sourced after review.
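To make the "aligns intra-class representations" idea concrete, here is a minimal NumPy sketch of a BYOL-style non-contrastive alignment loss with per-pair weights that can down-weight unreliable pseudo-label pairs. All names, the weighting scheme, and the loss form are illustrative assumptions, not the paper's actual NoiSNCL implementation:

```python
import numpy as np

def noisy_pair_alignment_loss(z_pred, z_target, pair_weights):
    """Illustrative non-contrastive alignment loss (NOT the paper's NoiSNCL).

    Pulls each predicted embedding toward its target embedding (treated as
    a stop-gradient constant, as in BYOL-style methods), with per-pair
    weights that can damp pairs whose shared pseudo-label is likely wrong
    under unreliable PU supervision.
    """
    # L2-normalize both views so the loss depends only on direction.
    z_pred = z_pred / np.linalg.norm(z_pred, axis=1, keepdims=True)
    z_target = z_target / np.linalg.norm(z_target, axis=1, keepdims=True)
    # Per-pair loss: 2 - 2*cos(z_pred, z_target), ranging over [0, 4].
    per_pair = 2.0 - 2.0 * np.sum(z_pred * z_target, axis=1)
    # Weighted mean: a low weight means the pair barely drives alignment.
    return float(np.average(per_pair, weights=pair_weights))


# Identical pairs give zero loss; antipodal pairs give the maximum (4).
z = np.array([[1.0, 0.0], [0.0, 1.0]])
w = np.array([1.0, 1.0])
print(noisy_pair_alignment_loss(z, z, w))   # → 0.0
print(noisy_pair_alignment_loss(z, -z, w))  # → 4.0
```

Unlike a contrastive loss, there is no negative-pair repulsion term here, which is what lets this family of methods operate without auxiliary negative samples.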
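The "conservative negative supervision via regret-based label updates" can likewise be sketched. The rule below is a hypothetical simplification, not the paper's PLD: an unlabeled sample starts as a positive candidate and is flipped to negative only when the classifier's positive score is confidently low, so ambiguous flips the model would likely regret are avoided:

```python
import numpy as np

def conservative_label_update(candidate_labels, pos_scores, flip_threshold=0.2):
    """Illustrative regret-style pseudo-label update (NOT the paper's PLD).

    candidate_labels: 1 for positive-candidate, 0 for pseudo-negative.
    pos_scores: the classifier's positive-class scores in [0, 1].

    A sample becomes pseudo-negative only when its positive score falls
    below a confident threshold; samples with ambiguous scores keep their
    current label, so negative supervision stays conservative.
    """
    updated = candidate_labels.copy()
    confident_negative = pos_scores < flip_threshold
    updated[confident_negative] = 0
    return updated

labels = np.ones(4, dtype=int)             # all start as positive candidates
scores = np.array([0.9, 0.5, 0.15, 0.05])  # classifier positive scores
print(conservative_label_update(labels, scores))  # → [1 1 0 0]
```

In an EM-style loop, the current labels would guide the alignment loss (M-step-like), and the retrained classifier's scores would refresh the labels (E-step-like), which is the mutual-benefit structure the abstract describes.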