Feature selection based on cluster assumption in PU learning

📅 2025-04-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Conventional feature selection methods in positive-unlabeled (PU) learning suffer from statistical bias in positive-class characterization because they erroneously treat unlabeled instances as negative examples. Method: This paper proposes a binary optimization-based feature selection framework that explicitly incorporates the cluster assumption—the prior that positive instances concentrate into clusters in a well-chosen feature space. The method designs a cluster-structure-driven objective function that avoids imposing incorrect negative labels on unlabeled data, enabling feature selection in the PU setting without pseudo-labeling. Contribution/Results: By unifying principles from PU learning and clustering, the approach demonstrates robustness on synthetic data and performs competitively against ten conventional baselines across three open PU benchmarks. Notably, it maintains consistent performance even when the cluster assumption does not strictly hold, underscoring its practical reliability for downstream classification.

📝 Abstract
Feature selection is essential for efficient data mining and sometimes encounters the positive-unlabeled (PU) learning scenario, where only a few positive labels are available, while most data remains unlabeled. In certain real-world PU learning tasks, data subjected to adequate feature selection often form clusters with concentrated positive labels. Conventional feature selection methods that treat unlabeled data as negative may fail to capture the statistical characteristics of positive data in such scenarios, leading to suboptimal performance. To address this, we propose a novel feature selection method based on the cluster assumption in PU learning, called FSCPU. FSCPU formulates the feature selection problem as a binary optimization task, with an objective function explicitly designed to incorporate the cluster assumption in the PU learning setting. Experiments on synthetic datasets demonstrate the effectiveness of FSCPU across various data conditions. Moreover, comparisons with 10 conventional algorithms on three open datasets show that FSCPU achieves competitive performance in downstream classification tasks, even when the cluster assumption does not strictly hold.
Problem

Research questions and friction points this paper is trying to address.

Feature selection in PU learning with limited positive labels
Overcoming bias when treating unlabeled data as negative
Enhancing performance via cluster assumption in feature selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cluster assumption-based feature selection in PU learning
Binary optimization for feature selection
Explicit cluster assumption incorporation in objective function
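To make the core idea concrete, here is a minimal illustrative sketch of cluster-assumption-driven feature selection as a binary optimization problem: choose a binary feature mask so that the labeled positive instances form the tightest cluster in the selected subspace. This is a toy stand-in, not the paper's actual FSCPU objective or solver (which are not reproduced here); the compactness criterion and the exhaustive search over masks are assumptions for illustration only.

```python
import itertools
import numpy as np

def cluster_compactness(X_pos, mask):
    """Mean pairwise squared distance among positive points in the
    subspace selected by the binary mask (lower = tighter cluster).
    Illustrative criterion, not the paper's objective."""
    Z = X_pos[:, mask]
    diffs = Z[:, None, :] - Z[None, :, :]
    return float((diffs ** 2).sum(axis=-1).mean())

def select_features(X_pos, k):
    """Exhaustively search binary masks with exactly k active features,
    keeping the one under which positives cluster most tightly.
    Feasible only for small feature counts; FSCPU's actual binary
    optimization strategy is not shown here."""
    d = X_pos.shape[1]
    best_mask, best_score = None, np.inf
    for idx in itertools.combinations(range(d), k):
        mask = np.zeros(d, dtype=bool)
        mask[list(idx)] = True
        score = cluster_compactness(X_pos, mask)
        if score < best_score:
            best_mask, best_score = mask, score
    return best_mask

# Toy data: positives are tight in features 0 and 1; feature 2 is noise.
rng = np.random.default_rng(0)
X_pos = np.hstack([rng.normal(0.0, 0.1, (20, 2)),   # informative, tight
                   rng.normal(0.0, 5.0, (20, 1))])  # noisy, spread out
mask = select_features(X_pos, k=2)
print(mask)  # the two informative features should be selected
```

Note that only the positive instances enter the objective: the unlabeled data is never assigned a negative label, which is the bias the paper aims to avoid.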
Motonobu Uchikoshi
The Japan Research Institute, Limited, Shinagawa-ku, Tokyo, Japan
Youhei Akimoto
University of Tsukuba
Optimization · Evolutionary Computation · Machine Learning · Theory of Randomized Search Heuristics