When LRP Diverges from Leave-One-Out in Transformers

📅 2025-10-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Layer-wise Relevance Propagation (LRP) is widely used for interpretability in Transformers, yet its theoretical alignment with Leave-One-Out (LOO) feature importance—considered a gold standard—is poorly understood. We identify two critical flaws: (1) AttnLRP’s bilinear propagation rule violates implementation invariance, and (2) CP-LRP introduces systematic error when redistributing relevance through the softmax layer—jointly causing substantial divergence from LOO, especially in middle-to-late layers. Method: We propose a softmax-bypassing LRP variant that backpropagates relevance directly, avoiding softmax-induced distortion, and ground it in linear attention modeling for rigorous theoretical analysis and empirical validation. Contribution/Results: Our method significantly improves layer-wise agreement between LRP and LOO across all Transformer layers. It quantifies the detrimental impact of softmax propagation error and bilinear factor sensitivity on attribution fidelity, thereby establishing theoretical foundations and actionable design principles for trustworthy explanation methods.

📝 Abstract
Leave-One-Out (LOO) provides an intuitive measure of feature importance but is computationally prohibitive. While Layer-Wise Relevance Propagation (LRP) offers a potentially efficient alternative, its axiomatic soundness in modern Transformers remains largely under-examined. In this work, we first show that the bilinear propagation rules used in recent advances of AttnLRP violate the implementation invariance axiom. We prove this analytically and confirm it empirically in linear attention layers. Second, we also revisit CP-LRP as a diagnostic baseline and find that bypassing relevance propagation through the softmax layer -- backpropagating relevance only through the value matrices -- significantly improves alignment with LOO, particularly in middle-to-late Transformer layers. Overall, our results suggest that (i) bilinear factorization sensitivity and (ii) softmax propagation error potentially jointly undermine LRP's ability to approximate LOO in Transformers.
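The abstract notes that LOO is intuitive but computationally prohibitive. A minimal sketch (with a hypothetical stand-in `model`) makes the cost concrete: LOO requires one extra forward pass per input feature, which is what LRP aims to avoid.

```python
import numpy as np

def loo_importance(model, x):
    """Leave-one-out importance: re-run the model once per feature,
    ablating that feature, and record the drop in the output score.
    Cost is one forward pass per feature -- the source of LOO's
    computational burden on long Transformer inputs."""
    base = model(x)
    scores = np.empty(len(x))
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = 0.0              # zero-masking ablation of feature i
        scores[i] = base - model(x_masked)
    return scores

# toy "model": a fixed linear scorer (hypothetical, for illustration only)
w = np.array([0.5, -1.0, 2.0])
model = lambda v: float(v @ w)
x = np.array([1.0, 1.0, 1.0])
print(loo_importance(model, x))        # for a linear model, score_i = w_i * x_i
```

For this linear toy model the LOO scores recover each feature's exact contribution; in a real Transformer, the nonlinearity is precisely what makes the LRP-vs-LOO comparison nontrivial.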
Problem

Research questions and friction points this paper is trying to address.

AttnLRP's bilinear propagation rule violates implementation invariance in Transformer attention layers
Softmax propagation error reduces LRP alignment with Leave-One-Out
Bilinear factorization sensitivity undermines LRP approximation of LOO
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analytical and empirical proof that bilinear propagation rules violate the implementation invariance axiom
Bypassing relevance propagation through softmax layer improves alignment
Backpropagating relevance only through value matrices enhances LOO approximation
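The softmax-bypass idea above can be sketched for a single attention head. This is a simplified illustration, not the paper's implementation: the attention weights are treated as constants, so relevance flows only through the value path `O = A @ V`, never through the softmax. The epsilon-stabilized redistribution rule used here is an assumption standing in for whatever LRP rule the authors apply.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def value_only_lrp(Q, K, V, R_out, eps=1e-9):
    """Softmax-bypassing relevance step for one attention head (sketch).
    A = softmax(QK^T / sqrt(d)) is held fixed, so the layer is linear in V
    and relevance is redistributed only along the value matrices."""
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))       # (n, n) attention, treated as constant
    O = A @ V                                # (n, d) head output
    R_V = np.zeros_like(V)
    for i in range(O.shape[0]):
        # epsilon-stabilized denominator keeps the split well-defined near 0
        denom = O[i] + eps * np.where(O[i] >= 0, 1.0, -1.0)
        # token j's value row receives relevance in proportion to A[i, j] * V[j]
        R_V += (A[i][:, None] * V) / denom * R_out[i]
    return R_V
```

Because the redistribution is proportional over a purely linear map, total relevance is (approximately) conserved: summing the returned `R_V` recovers the relevance that entered at the output, which is the property that relevance routed through the softmax path can distort.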