🤖 AI Summary
Existing representation learning methods—such as kernel-based approaches and deep neural networks—are typically opaque and lack semantic interpretability. Method: This paper proposes an end-to-end differentiable Takagi–Sugeno–Kang (TSK) fuzzy representation learning framework explicitly designed for interpretability. It maps inputs into a high-dimensional feature space via semantically transparent fuzzy rule antecedents, while learning the rule consequents with parameterized differentiable modules—enabling seamless integration of TSK systems with gradient-based optimization. A piecewise-differentiable learning mechanism preserves classical fuzzy-logic optimization principles, and second-order manifold geometric constraints are incorporated to enhance robustness. Contribution/Results: Evaluated on multiple benchmark datasets, the method outperforms state-of-the-art black-box models on both classification and clustering tasks, achieving superior accuracy while retaining full semantic interpretability—demonstrating that performance and transparency can be enhanced together.
📝 Abstract
Representation learning has emerged as a crucial focus in machine and deep learning. It involves extracting meaningful and useful features and patterns from input data, thereby enhancing the performance of various downstream tasks such as classification, clustering, and prediction. Current mainstream representation learning methods primarily rely on non-linear data mining techniques, such as kernel methods and deep neural networks, to extract abstract knowledge from complex datasets. However, most of these methods are black-box, lacking transparency and interpretability in the learning process, which constrains their practical utility. To this end, this paper introduces a novel representation learning method grounded in an interpretable fuzzy rule-based model. Specifically, it is built upon the Takagi-Sugeno-Kang fuzzy system (TSK-FS): input data are first mapped to a high-dimensional fuzzy feature space through the antecedent part of the TSK-FS. Subsequently, a novel differentiable optimization method is proposed for consequent part learning, which preserves the model's interpretability and transparency while further exploring the nonlinear relationships within the data. This optimization method retains the essence of traditional optimization: certain parts of the process are parameterized, corresponding differentiable modules are constructed, and a deep optimization process is implemented. Consequently, this method not only enhances the model's performance but also ensures its interpretability. Moreover, a second-order geometry preservation method is introduced to further improve the robustness of the proposed method. Extensive experiments conducted on various benchmark datasets validate the superiority of the proposed method, highlighting its potential for advancing representation learning methodologies.
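To make the antecedent mapping concrete, the following is a minimal NumPy sketch of the standard TSK-FS fuzzy feature construction the abstract refers to: Gaussian membership functions yield per-rule firing strengths, which are normalized and used to weight the extended input `[1, x]`, producing a high-dimensional fuzzy feature vector. All parameter names (`centers`, `sigmas`) and the choice of Gaussian memberships are illustrative assumptions, not the paper's exact formulation, and the paper's differentiable consequent learning and second-order geometry preservation are not shown here.

```python
import numpy as np

def tsk_antecedent_features(X, centers, sigmas):
    """Map inputs into a high-dimensional fuzzy feature space via
    TSK rule antecedents (illustrative sketch; Gaussian memberships
    and parameter names are assumptions, not the paper's exact setup).

    X:       (n, d) input data
    centers: (R, d) Gaussian membership centers, one row per rule
    sigmas:  (R, d) Gaussian membership widths
    Returns: (n, R*(d+1)) fuzzy feature matrix
    """
    n, d = X.shape
    R = centers.shape[0]
    # Rule firing strength = product of per-dimension Gaussian memberships
    diff = X[:, None, :] - centers[None, :, :]                    # (n, R, d)
    firing = np.exp(-0.5 * np.sum((diff / sigmas) ** 2, axis=2))  # (n, R)
    # Normalize firing strengths across rules
    norm = firing / (firing.sum(axis=1, keepdims=True) + 1e-12)   # (n, R)
    # Extended input [1, x], weighted by each rule's normalized strength
    Xe = np.hstack([np.ones((n, 1)), X])                          # (n, d+1)
    feats = norm[:, :, None] * Xe[:, None, :]                     # (n, R, d+1)
    return feats.reshape(n, R * (d + 1))
```

In a classical TSK-FS, a linear consequent applied to these features recovers the rule-weighted output; the paper's contribution is to replace that analytic consequent solution with parameterized differentiable modules trained end-to-end.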