AI Summary
This work addresses a key limitation of existing feature attribution methods: they are predominantly post hoc and often fail to faithfully reflect a model's actual decision-making process, lacking intrinsic alignment with the predictions they explain. To overcome this, the paper introduces Pseudo-linear Neural Networks (PiNets), a framework grounded in the principle of model readability and designed to produce endogenous, instance-level linear predictions in an arbitrary feature space. This design ensures a natural alignment between explanation and prediction. Experiments on image classification and segmentation tasks show that PiNets not only achieve high consistency between explanations and predictions but also significantly outperform mainstream post hoc methods in faithfulness. The study establishes explanation alignment as a critical requirement for trustworthy prediction.
Abstract
Feature attribution is the dominant paradigm for explaining deep neural networks. However, most existing methods only loosely reflect the model's prediction-making process, thereby merely white-painting the black box. We argue that explanatory alignment is a key aspect of trustworthiness in prediction tasks: explanations must be directly linked to predictions, rather than serving as post-hoc rationalizations. We present model readability as a design principle enabling alignment, and PiNets as a modeling framework to pursue it in a deep learning context. PiNets are pseudo-linear networks that produce instance-wise linear predictions in an arbitrary feature space, making them linearly readable. We illustrate their use on image classification and segmentation tasks, demonstrating how PiNets produce explanations that are faithful across multiple criteria in addition to alignment.
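The core idea of "linearly readable" prediction can be illustrated with a minimal sketch. The code below is a hypothetical illustration (not the paper's actual architecture): a small network produces instance-wise coefficients `w(x)`, and the prediction is computed as a linear function of a feature map `phi(x)`, so the coefficients themselves constitute the explanation and reproduce the prediction exactly by construction. The function names `weight_net`, `phi`, and `pseudo_linear_predict` are assumptions introduced for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # Feature map defining the space in which the prediction is linear.
    # Here the identity; in general any chosen feature space.
    return x

def weight_net(x, W1, W2):
    # A tiny two-layer network mapping the input to
    # instance-wise linear coefficients w(x).
    h = np.tanh(W1 @ x)
    return W2 @ h

def pseudo_linear_predict(x, W1, W2):
    w = weight_net(x, W1, W2)   # instance-wise coefficients
    y = float(w @ phi(x))       # prediction, linear in phi(x)
    return y, w                 # w doubles as the attribution

d, h = 4, 8
W1 = rng.standard_normal((h, d))
W2 = rng.standard_normal((d, h))
x = rng.standard_normal(d)

y, w = pseudo_linear_predict(x, W1, W2)
# Alignment check: the attribution reconstructs the prediction exactly,
# rather than approximating it post hoc.
assert abs(y - float(w @ phi(x))) < 1e-12
```

In this reading, alignment is structural: there is no gap between the attribution scores and the quantity the model actually outputs, which is the property the abstract contrasts with post-hoc rationalization.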