🤖 AI Summary
Existing token attribution methods struggle with inefficiency and reduced faithfulness in long-context and multi-step reasoning scenarios. This work proposes FlashTrace, a novel approach that computes attribution for multi-token targets in a single pass via span-level aggregation, and introduces a recursive attribution mechanism that propagates importance scores back along the reasoning chain to the original input tokens. FlashTrace is the first method to simultaneously ensure high computational efficiency and attribution faithfulness, thereby overcoming the long-range interpretability bottleneck. Evaluated on benchmarks including RULER, MATH, and MorehopQA, FlashTrace achieves over a 130× speedup while significantly improving attribution quality, with even a single recursive pass effectively propagating importance.
📝 Abstract
Token attribution methods provide intuitive explanations for language model outputs by identifying causally important input tokens. However, as modern LLMs increasingly rely on extended reasoning chains, existing schemes face two critical challenges: (1) an efficiency bottleneck, where attributing a target span of M tokens within a context of length N requires O(M·N) operations, making long-context attribution prohibitively slow; and (2) a faithfulness drop, where intermediate reasoning tokens absorb attribution mass, preventing importance from propagating back to the original input. To address these challenges, we introduce FlashTrace, an efficient multi-token attribution method that employs span-wise aggregation to compute attribution over multi-token targets in a single pass while maintaining faithfulness. Moreover, we design a recursive attribution mechanism that traces importance through intermediate reasoning chains back to the source inputs. Extensive experiments on long-context retrieval (RULER) and multi-step reasoning (MATH, MorehopQA) tasks demonstrate that FlashTrace achieves over a 130× speedup over existing baselines while achieving superior faithfulness. We further analyze the dynamics of recursive attribution, showing that even a single recursive hop improves faithfulness by tracing importance through the reasoning chain.
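To make the two ideas in the abstract concrete, the sketch below illustrates them on toy attribution matrices. This is not the paper's implementation: the shapes, the mean-based span aggregation, and the additive combination of direct and propagated scores are all illustrative assumptions. Span-wise aggregation collapses the M rows of a target span into one vector (one pass instead of M), and a single recursive hop pushes the importance that landed on intermediate reasoning tokens back to the input tokens they depend on, here by matrix composition.

```python
import numpy as np

# Hypothetical sizes for illustration (not from the paper):
# a 3-token target span, 4 intermediate reasoning tokens, 6 input tokens.
rng = np.random.default_rng(0)
A_target_reason = rng.random((3, 4))        # target-span tokens -> reasoning tokens
A_reason_input = rng.random((4, 6))         # reasoning tokens -> original input tokens
A_target_input_direct = rng.random((3, 6))  # target-span tokens -> input tokens (direct)

# Span-wise aggregation (illustrative: a simple mean over the span's rows):
# the M-token target is treated as one unit, so attribution is computed once
# rather than once per target token.
span_to_reason = A_target_reason.mean(axis=0)              # shape (4,)
span_to_input_direct = A_target_input_direct.mean(axis=0)  # shape (6,)

# One recursive hop: importance absorbed by intermediate reasoning tokens
# is propagated back to the inputs those reasoning tokens depend on.
propagated = span_to_reason @ A_reason_input               # shape (6,)

# Combine direct and one-hop importance (illustrative additive combination).
scores = span_to_input_direct + propagated
print(scores.shape)  # (6,)
```

Under this toy model, each extra recursive hop is just another composition with a reasoning-to-source attribution matrix; the abstract's finding is that even one such hop already recovers importance that would otherwise remain stuck on intermediate tokens.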