🤖 AI Summary
Existing feature attribution methods lack rigorous theoretical foundations, and their empirical evaluation faces fundamental challenges. Method: We propose a first-principles, bottom-up attribution framework: using indicator functions as atomic building blocks, attribution is recursively defined via function-space composition, bypassing ad hoc axiomatic constraints. Contribution/Results: We derive, for the first time, a closed-form attribution expression for deep ReLU networks, enabling efficient, differentiable computation. The framework unifies mainstream methods, including Integrated Gradients (IG) and DeepLIFT, under a common theoretical lens. Furthermore, it enables attribution-driven, differentiable evaluation objectives, facilitating end-to-end optimization of attribution quality. Combining theoretical rigor with practical scalability, this framework establishes a novel paradigm for interpretable AI.
📄 Abstract
Feature attribution methods are a popular approach to explaining the behavior of machine learning models. They assign importance scores to each input feature, quantifying its influence on the model's prediction. However, evaluating these methods empirically remains a significant challenge. To sidestep this difficulty, several prior works have proposed axioms that any feature attribution method should satisfy. In this work, we argue that such axioms are often too restrictive, and propose in response a new feature attribution framework, built from the ground up. Rather than imposing axioms, we start by defining attributions for the simplest possible models, i.e., indicator functions, and use these as building blocks for more complex models. We then show that one recovers several existing attribution methods, depending on the choice of atomic attribution. Subsequently, we derive closed-form expressions for the attribution of deep ReLU networks, and take a step toward the optimization of evaluation metrics with respect to feature attributions.
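As a concrete illustration of one of the attribution methods the abstract says the framework recovers, here is a minimal NumPy sketch of Integrated Gradients, which attributes feature i as (x_i - b_i) times the path integral of the partial derivative from a baseline b to the input x. The function names and the midpoint Riemann-sum approximation are our illustrative choices, not notation from the paper:

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=64):
    """Approximate IG_i(x) = (x_i - b_i) * ∫_0^1 ∂F/∂x_i(b + a(x - b)) da
    with a midpoint Riemann sum over `steps` points on the straight path."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1] subintervals
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_fn(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Sanity check on a linear model F(x) = w . x: the gradient is constant,
# so IG recovers the exact per-feature contribution w_i * (x_i - b_i).
w = np.array([1.0, -2.0, 3.0])
grad_fn = lambda z: w          # gradient of the linear model, independent of z
x = np.array([1.0, 1.0, 1.0])
b = np.zeros(3)
attr = integrated_gradients(grad_fn, x, b)
```

For the linear model above, `attr` equals `w * (x - b)` exactly, since the integrand is constant along the path; for nonlinear models the quality of the approximation depends on `steps`.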