🤖 AI Summary
Addressing the challenge of interpretable extraction of nonlinear feature interactions from high-dimensional, high-stakes tabular data, this paper proposes a three-stage framework: kernel principal component analysis (KPCA)-based signal generation, sparse polynomial distillation, and multi-objective knockoff-based variable selection. The method achieves statistically credible interpretability in self-supervised feature learning for the first time, with theoretically grounded error bounds and strict false discovery rate (FDR) control. Compared with baselines, including KPCA and sparse KPCA, it significantly improves feature selection accuracy and stability across diverse real-world regression and classification tasks. Visualizations further confirm its capacity to uncover actionable business insights. The core innovation lies in synergistically integrating the expressive power of kernel methods with the human-readability of sparse polynomials, while embedding rigorous multiple-testing control directly into the feature learning pipeline.
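The first two stages, KPCA-based signal generation and sparse polynomial distillation, can be sketched as below. This is a minimal NumPy illustration under assumed choices (an RBF kernel, degree-2 polynomial features, and an ISTA-style soft-thresholding fit standing in for the paper's distillation step), not the authors' implementation:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances -> RBF Gram matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kpca_scores(X, n_components=2, gamma=1.0):
    # Stage 1: self-supervised signals from kernel PCA
    # (eigendecomposition of the double-centered Gram matrix).
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc = H @ rbf_kernel(X, gamma) @ H
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # take the top components
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 1e-12))

def poly_features(X, degree=2):
    # Degree-2 polynomial basis: x_j and all products x_j * x_k (j <= k)
    cols = [X[:, j] for j in range(X.shape[1])]
    names = [f"x{j}" for j in range(X.shape[1])]
    for j in range(X.shape[1]):
        for k in range(j, X.shape[1]):
            cols.append(X[:, j] * X[:, k])
            names.append(f"x{j}*x{k}")
    return np.column_stack(cols), names

def distill_sparse(Z, target, lam=0.1, n_iter=200):
    # Stage 2: distill a KPCA signal into a sparse polynomial by
    # lasso-style ISTA (gradient step + soft thresholding).
    n, p = Z.shape
    Zs = (Z - Z.mean(0)) / (Z.std(0) + 1e-12)
    t = target - target.mean()
    beta = np.zeros(p)
    step = 1.0 / np.linalg.norm(Zs, 2) ** 2      # 1 / sigma_max^2
    for _ in range(n_iter):
        beta = beta - step * (Zs.T @ (Zs @ beta - t))
        beta = np.sign(beta) * np.maximum(np.abs(beta) - step * lam * n, 0.0)
    return beta
```

The sparse coefficient vector `beta` is what makes the learned signal human-readable: each nonzero entry names a single feature or a pairwise interaction in the polynomial basis.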
📝 Abstract
In high-dimensional, high-stakes contexts, extracting features from complex tabular data with both rigorous statistical guarantees and interpretability remains a formidable challenge. Traditional methods such as Principal Component Analysis (PCA) reduce dimensionality and identify the features that explain the most variance, but are constrained by their linearity assumptions. Neural networks, in contrast, offer assumption-free feature extraction through self-supervised techniques such as autoencoders, though their opacity limits their use in fields requiring transparency. To address this gap, this paper introduces Spofe, a novel self-supervised machine learning pipeline that marries the power of kernel principal components for capturing nonlinear dependencies with a sparse, principled polynomial representation, achieving clear interpretability with statistical rigor. Underpinning our approach is a robust theoretical framework that delivers precise error bounds and rigorous false discovery rate (FDR) control through a multi-objective knockoff selection procedure. The pipeline bridges the gap between data-driven complexity and statistical reliability in three stages: (1) generating self-supervised signals using kernel principal components to model complex patterns, (2) distilling these signals into sparse polynomial functions for improved interpretability, and (3) applying a multi-objective knockoff selection procedure with significance testing to rigorously identify important features. Extensive experiments on diverse real-world datasets demonstrate the effectiveness of Spofe, which consistently surpasses kernel PCA (KPCA), sparse kernel PCA (SKPCA), and other methods in feature selection for both regression and classification tasks. Visualizations and case studies highlight its ability to uncover key insights, enhancing interpretability and practical utility.
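To illustrate the mechanics of the knockoff selection stage, the sketch below implements a heavily simplified, single-objective knockoff filter. The knockoff copies are independent column permutations and the importance statistics are marginal correlations; both are stand-ins for the paper's valid knockoff construction and multi-objective statistics, so this shows only the FDR-thresholding mechanism:

```python
import numpy as np

def knockoff_filter(X, y, q=0.2, seed=None):
    """Simplified knockoff-style selection at target FDR level q.

    NOTE: real model-X knockoffs must preserve the joint feature
    distribution; here each knockoff column is merely a row permutation
    of the original, which is only an illustrative approximation.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Build knockoff copies by permuting each column independently.
    X_ko = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
    A = np.column_stack([X, X_ko])
    A = (A - A.mean(0)) / (A.std(0) + 1e-12)
    # Marginal-correlation importance statistics for originals and knockoffs.
    z = np.abs(A.T @ (y - y.mean())) / n
    W = z[:p] - z[p:]                     # knockoff difference statistics
    # Knockoff+ threshold: smallest t whose estimated FDP is below q.
    tau = np.inf
    for t in np.sort(np.abs(W[W != 0])):
        fdp = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp <= q:
            tau = t
            break
    return np.where(W >= tau)[0]          # indices of selected features
```

The key design point, shared with the full method, is that the threshold is chosen from the data itself: knockoff statistics of null features are symmetric around zero, so the count of large negative statistics estimates the number of false positives among the selected set.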