🤖 AI Summary
Existing fuzzy decision system (FDS)-based feature selection methods suffer from two key limitations: (1) evaluation criteria are decoupled from downstream learning performance, and (2) inter-class relationships are modeled solely via non-directional Euclidean distance, neglecting spatial directionality and the effect of instance distribution on decision boundaries. To address these issues, we propose a spatially aware, separability-driven feature selection framework that, for the first time, unifies intra-class compactness, inter-class separability, scalar distance, and spatial directional information to explicitly characterize class structure and sharpen decision boundaries. Our method integrates this separability criterion into an FDS via a forward greedy search strategy. Extensive experiments on ten real-world datasets demonstrate that the proposed approach significantly outperforms eight state-of-the-art algorithms, achieving consistent improvements in classification accuracy, clustering performance, and feature interpretability.
📝 Abstract
Feature selection is crucial for fuzzy decision systems (FDSs), as it identifies informative features and eliminates rule redundancy, thereby enhancing predictive performance and interpretability. Most existing methods either fail to align evaluation criteria directly with learning performance or rely solely on non-directional Euclidean distances to capture relationships among decision classes, which limits their ability to clarify decision boundaries. The spatial distribution of instances, however, has a potential impact on the clarity of such boundaries. Motivated by this, we propose Spatially-aware Separability-driven Feature Selection (S$^2$FS), a novel framework for FDSs guided by a spatially-aware separability criterion. This criterion jointly considers within-class compactness and between-class separation by integrating scalar distances with spatial directional information, providing a more comprehensive characterization of class structures. S$^2$FS employs a forward greedy strategy to iteratively select the most discriminative features. Extensive experiments on ten real-world datasets demonstrate that S$^2$FS consistently outperforms eight state-of-the-art feature selection algorithms in both classification accuracy and clustering performance, while feature visualizations further confirm the interpretability of the selected features.
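The forward greedy loop described in the abstract can be sketched as follows. This is a minimal illustration only: `separability_score` here is a simplified stand-in (between-class centroid distance over within-class scatter, weighted by a directional-spread term), not the paper's exact S$^2$FS criterion, and all function names are placeholders.

```python
import numpy as np

def separability_score(X, y):
    """Toy separability criterion (NOT the paper's exact S^2FS measure):
    between-class centroid separation divided by within-class scatter,
    weighted by how differently class centroids point from the global mean."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    # within-class compactness: mean distance of instances to their centroid
    within = np.mean([np.linalg.norm(X[y == c] - centroids[i], axis=1).mean()
                      for i, c in enumerate(classes)])
    # between-class separation: mean pairwise centroid distance (scalar part)
    pairs = [(i, j) for i in range(len(classes)) for j in range(i + 1, len(classes))]
    between = np.mean([np.linalg.norm(centroids[i] - centroids[j]) for i, j in pairs])
    # directional part: reward centroids spreading in different directions
    dirs = centroids - mu
    dirs = dirs / np.clip(np.linalg.norm(dirs, axis=1, keepdims=True), 1e-12, None)
    direction = 1.0 - np.mean([dirs[i] @ dirs[j] for i, j in pairs])
    return between / (within + 1e-12) * direction

def forward_greedy_select(X, y, k):
    """Iteratively add the feature whose inclusion maximizes the score."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best_f, best_s = None, -np.inf
        for f in remaining:
            s = separability_score(X[:, selected + [f]], y)
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected
```

On synthetic data where only the first feature separates the classes, this sketch picks that feature first, illustrating how a separability-driven criterion drives the greedy search.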