🤖 AI Summary
Addressing the challenges of high noise, scarce positive samples, and severe class imbalance in malicious content detection, this paper proposes an uncertainty-aware Positive-Unlabeled (PU) representation learning framework. Methodologically, it introduces a novel uncertainty-aware contrastive loss coupled with an adaptive temperature scaling mechanism, integrated with a self-attention-guided LSTM encoder to enable dynamic contrastive weight assignment and robust positive-sample anchor construction. The framework significantly enhances discriminability and robustness of the learned embedding space: downstream classifiers achieve 93.38% accuracy, precision above 0.93, and near-perfect recall—substantially reducing false negatives. Moreover, it attains superior ROC-AUC performance, demonstrating strong effectiveness and generalizability in high-noise, low-resource scenarios.
📝 Abstract
We propose the Uncertainty Contrastive Framework (UCF), a Positive-Unlabeled (PU) representation learning framework that integrates uncertainty-aware contrastive loss, adaptive temperature scaling, and a self-attention-guided LSTM encoder to improve classification under noisy and imbalanced conditions. UCF dynamically adjusts contrastive weighting based on sample confidence, stabilizes training using positive anchors, and adapts temperature parameters to batch-level variability. Applied to malicious content classification, UCF-generated embeddings enable multiple traditional classifiers to achieve more than 93.38% accuracy, precision above 0.93, and near-perfect recall, with minimal false negatives and competitive ROC-AUC scores. Visual analyses confirm clear separation between positive and unlabeled instances, highlighting the framework's ability to produce calibrated, discriminative embeddings. These results position UCF as a robust and scalable solution for PU learning in high-stakes domains such as cybersecurity and biomedical text mining.