🤖 AI Summary
This study addresses a critical limitation of existing unsupervised feature selection methods: they are predominantly evaluated under single-label settings, a practice prone to biased performance comparisons because the evaluation label is chosen arbitrarily. To overcome this issue, the work systematically exposes the shortcomings of the conventional evaluation paradigm and proposes replacing it with a multi-label classification framework, establishing a more equitable and reliable assessment protocol. Extensive cross-method and cross-dataset experiments on 21 real-world multi-label datasets demonstrate that the relative performance rankings of feature selection algorithms shift substantially under the proposed paradigm, validating the necessity and effectiveness of the multi-label evaluation strategy.
📄 Abstract
Unsupervised feature selection aims to identify a compact subset of features that captures the intrinsic structure of data without supervised labels. Most existing studies evaluate methods on single-label datasets instantiated by selecting one label from multi-label data while keeping the original features. Because the chosen label can vary arbitrarily across experimental settings, the relative superiority of the compared methods can change depending on which label happens to be selected. Evaluating unsupervised feature selection methods solely by single-label accuracy is therefore an unreliable measure of their true discriminative ability. This study revisits this evaluation paradigm by adopting a multi-label classification framework. Experiments on 21 multi-label datasets with several representative methods show that performance rankings differ markedly from those reported under single-label settings, supporting multi-label evaluation as a fairer and more reliable basis for comparing unsupervised feature selection methods.
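To make the contrast between the two protocols concrete, the sketch below runs both on synthetic multi-label data. It is a minimal illustration, not the paper's actual setup: the variance-based selector stands in for any unsupervised feature selection method, and the dataset generator, logistic-regression classifiers, and micro-F1 metric are assumptions chosen for brevity.

```python
# Hypothetical sketch contrasting single-label vs. multi-label evaluation
# of an unsupervised feature selection method. All modeling choices here
# (variance-based selector, logistic regression, micro-F1) are illustrative.
import numpy as np
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Multi-label data: X has 50 features, Y has 5 binary label columns.
X, Y = make_multilabel_classification(
    n_samples=1000, n_features=50, n_classes=5, random_state=0
)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Unsupervised selection (labels never seen): keep the 10 highest-variance
# features as a stand-in for any real selection method.
selected = np.argsort(X_tr.var(axis=0))[-10:]

# Conventional protocol: pick ONE label column j and report its accuracy.
# The score, and hence any method ranking built on it, depends on the
# arbitrary choice of j.
for j in range(Y.shape[1]):
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, selected], Y_tr[:, j])
    acc = accuracy_score(Y_te[:, j], clf.predict(X_te[:, selected]))
    print(f"single-label eval on label {j}: accuracy = {acc:.3f}")

# Multi-label protocol: evaluate against ALL labels at once with a
# multi-label classifier and an aggregate metric such as micro-F1.
ml_clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
ml_clf.fit(X_tr[:, selected], Y_tr)
micro_f1 = f1_score(Y_te, ml_clf.predict(X_te[:, selected]), average="micro")
print(f"multi-label eval: micro-F1 = {micro_f1:.3f}")
```

Running the single-label loop typically yields a different accuracy for each label column, which is exactly the ambiguity the multi-label protocol removes by scoring one feature subset against the full label matrix.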