Towards Long-Horizon Interpretability: Efficient and Faithful Multi-Token Attribution for Reasoning LLMs

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing token attribution methods struggle with inefficiency and reduced faithfulness in long-context and multi-step reasoning scenarios. This work proposes FlashTrace, a novel approach that achieves efficient single-pass attribution for multi-token targets through span-level aggregation and introduces a recursive attribution mechanism to propagate importance scores back along the reasoning chain to the original input tokens. FlashTrace is the first method to simultaneously ensure high computational efficiency and attribution faithfulness, thereby overcoming the long-range interpretability bottleneck. Evaluated on benchmarks including RULER, MATH, and MorehopQA, FlashTrace achieves over 130× speedup while significantly improving attribution quality, effectively propagating importance with just a single recursive pass.
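The span-level aggregation idea can be sketched with a toy example. The snippet below is illustrative only (assumed semantics, not the paper's actual implementation): for gradient-based attribution, the gradient of a summed target-span loss equals the sum of the per-token gradients, so attributing the aggregated loss in one backward pass yields the same totals as running one pass per target token and summing, replacing M model passes with one.

```python
import random

def attribute(inputs, grad):
    """One 'backward pass': gradient x input attribution for a scalar loss."""
    return [g * x for g, x in zip(grad, inputs)]

random.seed(0)
N, M = 6, 3                                   # context length, target span length
inputs = [random.uniform(-1, 1) for _ in range(N)]
# grads[j][i]: gradient of target token j's log-prob w.r.t. input i (toy values)
grads = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(M)]

# Naive scheme: one pass per target token, then sum -> M model passes.
naive = [0.0] * N
for j in range(M):
    for i, s in enumerate(attribute(inputs, grads[j])):
        naive[i] += s

# Span-wise aggregation: sum the span's gradients first (linearity of the
# gradient), then attribute once -> a single model pass.
agg_grad = [sum(grads[j][i] for j in range(M)) for i in range(N)]
single = attribute(inputs, agg_grad)

assert all(abs(a - b) < 1e-9 for a, b in zip(naive, single))
```

The equality holds by linearity; the speedup comes from amortizing the expensive backward pass over the whole target span rather than repeating it per token.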

📝 Abstract
Token attribution methods provide intuitive explanations for language model outputs by identifying causally important input tokens. However, as modern LLMs increasingly rely on extended reasoning chains, existing schemes face two critical challenges: (1) efficiency bottleneck, where attributing a target span of M tokens within a context of length N requires O(M*N) operations, making long-context attribution prohibitively slow; and (2) faithfulness drop, where intermediate reasoning tokens absorb attribution mass, preventing importance from propagating back to the original input. To address these, we introduce FlashTrace, an efficient multi-token attribution method that employs span-wise aggregation to compute attribution over multi-token targets in a single pass, while maintaining faithfulness. Moreover, we design a recursive attribution mechanism that traces importance through intermediate reasoning chains back to source inputs. Extensive experiments on long-context retrieval (RULER) and multi-step reasoning (MATH, MorehopQA) tasks demonstrate that FlashTrace achieves over 130x speedup over existing baselines while maintaining superior faithfulness. We further analyze the dynamics of recursive attribution, showing that even a single recursive hop improves faithfulness by tracing importance through the reasoning chain.
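The recursive attribution mechanism described above can be sketched as a composition of two attribution maps. This is a hedged toy illustration (assumed semantics, not FlashTrace's code): if `B[r]` scores how much reasoning token r matters to the answer, and `A[r][i]` scores how much input token i matters to reasoning token r, one recursive hop composes them so that importance absorbed by the reasoning chain is pushed back onto the original inputs.

```python
def recursive_hop(answer_to_reasoning, reasoning_to_input):
    """Propagate answer->reasoning importance back onto input tokens by
    weighting each reasoning token's input attributions by its own score."""
    n_inputs = len(reasoning_to_input[0])
    out = [0.0] * n_inputs
    for r, b in enumerate(answer_to_reasoning):
        for i in range(n_inputs):
            out[i] += b * reasoning_to_input[r][i]
    return out

# Toy numbers: 2 reasoning tokens, 3 input tokens.
B = [0.7, 0.3]                      # answer's importance over reasoning tokens
A = [[0.5, 0.4, 0.1],               # reasoning token 0's importance over inputs
     [0.0, 0.2, 0.8]]               # reasoning token 1's importance over inputs

input_importance = recursive_hop(B, A)
print([round(v, 2) for v in input_importance])  # [0.35, 0.34, 0.31]
```

A single hop like this already redistributes the answer's attribution mass from intermediate reasoning tokens to the source context, matching the abstract's observation that even one recursive pass improves faithfulness.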
Problem

Research questions and friction points this paper is trying to address.

token attribution
long-horizon reasoning
efficiency bottleneck
faithfulness
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-token attribution
long-horizon interpretability
recursive attribution
faithfulness
efficient explanation
Wenbo Pan
Department of Computer Science, City University of Hong Kong, Hong Kong SAR, China
Zhichao Liu
Harbin Institute of Technology, Harbin, China
Xianlong Wang
Ph.D. student, City University of Hong Kong
Trustworthy LLM/VLM, Embodied AI, Unlearnable Example, 3D Point Cloud, Poisoning/Adversarial Attack
Haining Yu
Harbin Institute of Technology, Harbin, China
Xiaohua Jia
Chinese Academy of Sciences