🤖 AI Summary
This work addresses the challenge of quantifying input feature importance in deep neural networks. We propose a training-embedded spectral reparameterization method that directly employs the eigenvalues associated with input nodes as robust proxies for feature relevance, enabling simultaneous feature importance estimation and model training—without post-hoc analysis or auxiliary supervision. Our key contribution is the first use of input-node eigenvalue sensitivity in spectral neural networks to characterize relative feature importance, coupled with spectral reparameterization during optimization to ensure numerical stability. Experiments on both synthetic and real-world datasets demonstrate that the method significantly improves feature selection efficiency and model interpretability while strictly preserving predictive accuracy—achieving zero performance degradation.
📝 Abstract
In machine learning practice it is often useful to identify relevant input features, so as to obtain compact dataset for more efficient numerical handling. On the other hand, by isolating key input elements, ranked according their respective degree of relevance, can help to elaborate on the process of decision making. Here, we propose a novel method to estimate the relative importance of the input components for a Deep Neural Network. This is achieved by leveraging on a spectral re-parametrization of the optimization process. Eigenvalues associated to input nodes provide in fact a robust proxy to gauge the relevance of the supplied entry features. Notably, the spectral features ranking is performed automatically, as a byproduct of the network training, with no additional processing to be carried out. The technique is successfully challenged against both synthetic and real data.