🤖 AI Summary
This study investigates how input information propagates through the embedding space of Transformers and how minimal token perturbations affect this process. The authors propose a fine-grained analytical framework that combines targeted token perturbations with precise tracking of embedding displacements, enabling a layer-wise characterization of both information mixing and perturbation propagation. The analysis shows that rare tokens induce disproportionately large embedding shifts, and that representations become increasingly entangled with network depth. Notably, embedding changes in the early layers are both highly sensitive and semantically interpretable, making them effective, lightweight proxies for model explanation. The work offers a new perspective on the internal representation dynamics of Transformers and systematically validates the common assumption that the first few layers alone suffice for effective interpretation, contributing both theoretical grounding and practical tools for explainable AI.
📝 Abstract
Understanding how information propagates through Transformer models is a key challenge for interpretability. In this work, we study the effects of minimal token perturbations on the embedding space. In our experiments, we analyze how token frequency relates to the magnitude of the resulting embedding shifts, showing that rare tokens usually lead to larger shifts. Moreover, we study how perturbations propagate across layers, demonstrating that input information becomes increasingly intermixed in deeper layers. Our findings validate the common assumption that the first layers of a model can serve as proxies for model explanations. Overall, this work introduces the combination of token perturbations and embedding-space shifts as a powerful tool for model interpretability.
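The core measurement behind this approach can be illustrated with a toy sketch: perturb a single input token, run both sequences through a model, and compare hidden states layer by layer. The miniature "model" below (a random embedding table plus linear layers with neighbor averaging) is purely hypothetical and stands in for a real Transformer; it is only meant to show how a one-token perturbation touches more and more positions as depth increases.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, LAYERS, SEQ = 100, 16, 4, 8

# Toy stand-ins for a real model's embedding table and per-layer weights.
emb = rng.normal(size=(VOCAB, DIM))
Ws = [rng.normal(scale=DIM ** -0.5, size=(DIM, DIM)) for _ in range(LAYERS)]

def mix(h):
    """Average each position with its neighbors -- a crude proxy for
    attention, which spreads information across positions."""
    padded = np.vstack([h[:1], h, h[-1:]])          # edge-pad the sequence
    return (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0

def forward(token_ids):
    """Return the hidden states after every layer."""
    h = emb[token_ids]                              # (SEQ, DIM)
    states = []
    for W in Ws:
        h = mix(h @ W)                              # transform, then mix
        states.append(h.copy())
    return states

tokens = rng.integers(0, VOCAB, size=SEQ)
perturbed = tokens.copy()
perturbed[3] = (perturbed[3] + 1) % VOCAB           # minimal one-token edit

# Layer-wise embedding displacement at every position.
shifts = [np.linalg.norm(a - b, axis=-1)
          for a, b in zip(forward(tokens), forward(perturbed))]

for layer, s in enumerate(shifts, start=1):
    affected = int(np.count_nonzero(s > 1e-12))
    print(f"layer {layer}: positions affected = {affected}")
```

With one mixing step per layer, the number of affected positions grows with depth (3, 5, 7, then all 8 here), mirroring the paper's observation that input information is increasingly intermixed in deeper layers and that early-layer shifts remain localized around the perturbed token.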