ScoresActivation: A New Activation Function for Model Agnostic Global Explainability by Design

๐Ÿ“… 2025-11-17
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Addressing the challenge of achieving both global interpretability and high predictive performance in large-scale deep learning models, this paper proposes an endogenous interpretability method that embeds feature importance estimation directly into the training process, enabling end-to-end differentiable joint optimization of feature selection and prediction. The core contribution is the model-agnostic ScoresActivation function, which produces high-fidelity feature scores in real time during forward propagation; its rankings correlate strongly with SHAP values and ground-truth importance (Pearson correlation > 0.92), and the scores are available throughout training. Evaluated on multiple benchmark datasets, the method improves classification accuracy by 11.24% to 29.33%, accelerates feature scoring 150-fold over SHAP (completing in roughly 2 seconds), and substantially improves robustness to irrelevant features. Collectively, it bridges the long-standing trade-off between accuracy and interpretability.
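The mechanism described above (learnable per-feature scores applied differentiably during the forward pass) can be illustrated with a minimal numpy sketch. This is a hypothetical simplification, not the paper's actual implementation: the function name `scores_activation`, the softmax gating, and the `temperature` parameter are assumptions for illustration only.

```python
import numpy as np

def scores_activation(importance_logits, x, temperature=1.0):
    """Hypothetical sketch of a differentiable feature-scoring activation.

    Maps learnable per-feature logits to a normalized score
    distribution (softmax) and gates the input features by it, so
    feature importance is optimized jointly with the predictor.
    """
    z = importance_logits / temperature
    z = z - z.max()                        # numerical stability
    scores = np.exp(z) / np.exp(z).sum()   # softmax over features
    return scores, x * scores              # scores and gated features

# Toy example: 5 features, where features 0 and 4 are "relevant"
logits = np.array([2.0, 0.1, -1.0, 0.1, 2.0])
x = np.ones(5)
scores, gated = scores_activation(logits, x)

# A global ranking falls out of the scores at any point in training
ranking = np.argsort(-scores)
```

Because the gating is a smooth function of the logits, gradients from the prediction loss flow back into the importance scores, which is what makes the ranking available "for free" during training rather than requiring a post hoc pass such as SHAP.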

๐Ÿ“ Abstract
Understanding the decisions of large deep learning models is a critical challenge for building transparent and trustworthy systems. Although current post hoc explanation methods offer valuable insights into feature importance, they are inherently disconnected from the model training process, limiting their faithfulness and utility. In this work, we introduce a novel differentiable approach to global explainability by design, integrating feature importance estimation directly into model training. Central to our method is the ScoresActivation function, a feature-ranking mechanism embedded within the learning pipeline. This integration enables models to prioritize features according to their contribution to predictive performance in a differentiable and end-to-end trainable manner. Evaluations across benchmark datasets show that our approach yields globally faithful, stable feature rankings aligned with SHAP values and ground-truth feature importance, while maintaining high predictive performance. Moreover, feature scoring is 150 times faster than the classical SHAP method, requiring only 2 seconds during training compared to SHAP's 300 seconds for feature ranking in the same configuration. Our method also improves classification accuracy by 11.24% with 10 features (5 relevant) and 29.33% with 16 features (5 relevant, 11 irrelevant), demonstrating robustness to irrelevant inputs. This work bridges the gap between model accuracy and interpretability, offering a scalable framework for inherently explainable machine learning.
Problem

Research questions and friction points this paper is trying to address.

Develops differentiable global explainability integrated into model training
Creates feature-ranking activation function for interpretable deep learning
Addresses faithfulness and speed limitations of post-hoc explanation methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

ScoresActivation function embeds feature ranking in training
Differentiable end-to-end trainable global explainability by design
Feature scoring 150 times faster than SHAP method
๐Ÿ”Ž Similar Papers
No similar papers found.
E
Emanuel Covaci
West University of Timisoara, Romania
Fabian Galis
West University of Timisoara, Romania
Radu Balan
Professor of Applied Mathematics, University of Maryland
Applied Harmonic Analysis
Daniela Zaharie
West University of Timisoara, Romania
Darian Onchis
West University of Timisoara, Romania