🤖 AI Summary
Conventional feature selection methods in positive-unlabeled (PU) learning suffer from statistical bias when characterizing the positive class, because they erroneously treat unlabeled instances as negative examples. Method: This paper proposes FSCPU, a binary optimization-based feature selection framework that explicitly incorporates the cluster assumption—the prior that positive instances form concentrated clusters in a well-chosen feature space. The method designs a cluster-structure-driven objective function that avoids imposing incorrect negative labels on unlabeled data, requiring only the few available positive labels. Contribution/Results: By unifying principles from PU learning and cluster-based feature selection, the approach demonstrates robustness on synthetic data across various data conditions, and comparisons with 10 conventional algorithms on three open datasets show competitive performance in downstream classification. Notably, it remains effective even when the cluster assumption does not strictly hold, underscoring its practical reliability.
📝 Abstract
Feature selection is essential for efficient data mining and sometimes encounters the positive-unlabeled (PU) learning scenario, where only a few positive labels are available, while most data remains unlabeled. In certain real-world PU learning tasks, data subjected to adequate feature selection often form clusters with concentrated positive labels. Conventional feature selection methods that treat unlabeled data as negative may fail to capture the statistical characteristics of positive data in such scenarios, leading to suboptimal performance. To address this, we propose a novel feature selection method based on the cluster assumption in PU learning, called FSCPU. FSCPU formulates the feature selection problem as a binary optimization task, with an objective function explicitly designed to incorporate the cluster assumption in the PU learning setting. Experiments on synthetic datasets demonstrate the effectiveness of FSCPU across various data conditions. Moreover, comparisons with 10 conventional algorithms on three open datasets show that FSCPU achieves competitive performance in downstream classification tasks, even when the cluster assumption does not strictly hold.
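The abstract describes feature selection as a binary optimization over feature masks, guided by the cluster assumption that known positives concentrate in a good subspace. The paper's exact objective is not given here, so the following is only a minimal illustrative sketch: a hypothetical compactness score that compares the average pairwise distance among labeled positives to that among all points in the selected subspace, minimized by exhaustive search over binary masks (feasible only for small feature counts).

```python
import numpy as np
from itertools import product

def positive_compactness(X, pos_idx, mask):
    # Illustrative score for a binary feature mask (NOT FSCPU's objective):
    # mean pairwise distance among labeled positives in the selected
    # subspace, normalized by the mean pairwise distance over all points.
    # Lower means positives cluster more tightly under this mask.
    sel = np.flatnonzero(mask)
    if sel.size == 0:
        return np.inf
    Xp = X[pos_idx][:, sel]
    Xa = X[:, sel]
    d_pos = np.mean([np.linalg.norm(a - b) for a in Xp for b in Xp])
    d_all = np.mean([np.linalg.norm(a - b) for a in Xa for b in Xa])
    return d_pos / d_all

def select_features(X, pos_idx, k):
    # Exhaustive search over binary masks selecting exactly k features.
    d = X.shape[1]
    best_mask, best_score = None, np.inf
    for bits in product([0, 1], repeat=d):
        if sum(bits) != k:
            continue
        mask = np.array(bits)
        score = positive_compactness(X, pos_idx, mask)
        if score < best_score:
            best_mask, best_score = mask, score
    return best_mask

# Synthetic example: positives cluster in feature 0; feature 1 is noise.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 10, 20), rng.uniform(0, 10, 20)])
pos_idx = np.arange(5)
X[pos_idx, 0] = rng.normal(5.0, 0.1, 5)  # positives are tight in feature 0
mask = select_features(X, pos_idx, k=1)  # selects feature 0
```

In this toy setting the search prefers feature 0, where the labeled positives are compact relative to the overall scatter, without ever assigning labels to the unlabeled points.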