Aligned explanations in neural networks

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses a limitation of existing feature attribution methods: being predominantly post hoc, they often fail to faithfully reflect a model's actual decision-making process because they lack intrinsic alignment with its predictions. To overcome this, the paper introduces Pseudo-linear Neural Networks (PiNets), a framework grounded in the design principle of model readability, which produces endogenous, instance-level linear predictions in an arbitrary feature space. This design aligns explanation and prediction by construction. Experiments on image classification and segmentation tasks show that PiNets not only achieve high consistency between explanations and predictions but also outperform mainstream post hoc methods in faithfulness. The study establishes explanation alignment as a critical requirement for trustworthy prediction.

πŸ“ Abstract
Feature attribution is the dominant paradigm for explaining deep neural networks. However, most existing methods only loosely reflect the model's prediction-making process, thereby merely white-painting the black box. We argue that explanatory alignment is a key aspect of trustworthiness in prediction tasks: explanations must be directly linked to predictions, rather than serving as post-hoc rationalizations. We present model readability as a design principle enabling alignment, and PiNets as a modeling framework to pursue it in a deep learning context. PiNets are pseudo-linear networks that produce instance-wise linear predictions in an arbitrary feature space, making them linearly readable. We illustrate their use on image classification and segmentation tasks, demonstrating how PiNets produce explanations that are faithful across multiple criteria in addition to alignment.
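The abstract's core idea — a network that is "linearly readable" because it produces instance-wise linear predictions in a feature space — can be illustrated with a minimal sketch. The paper gives no implementation details here, so everything below is a hypothetical construction: a small auxiliary network maps each input to per-instance linear coefficients, and the prediction is the inner product of those coefficients with the features. The per-feature contributions then sum exactly to the prediction, which is the alignment property described above. All names and dimensions are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_linear_predict(x, W1, b1, W2, b2):
    """Hypothetical pseudo-linear head: a small network maps input x to
    instance-wise coefficients w(x); the prediction is the inner product
    w(x) . x, so the coefficients double as the explanation."""
    h = np.tanh(W1 @ x + b1)         # hidden representation of this instance
    w = W2 @ h + b2                  # instance-wise linear coefficients w(x)
    attributions = w * x             # per-feature contributions w(x)_i * x_i
    prediction = attributions.sum()  # linear readout in the feature space
    return prediction, attributions

# toy dimensions and random parameters (illustrative only)
d, hid = 4, 8
W1 = rng.normal(size=(hid, d)); b1 = rng.normal(size=hid)
W2 = rng.normal(size=(d, hid)); b2 = rng.normal(size=d)
x = rng.normal(size=d)

pred, attr = pseudo_linear_predict(x, W1, b1, W2, b2)
# alignment check: the attributions sum exactly to the prediction,
# rather than approximating it after the fact as post hoc methods do
assert np.isclose(attr.sum(), pred)
```

The design point this sketch makes is that the explanation is not computed from the prediction; both are read off the same instance-wise linear form, so they cannot disagree.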
Problem

Research questions and friction points this paper is trying to address.

explanatory alignment
feature attribution
model readability
trustworthy explanations
neural network interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

explanatory alignment
model readability
PiNets
feature attribution
faithful explanations