🤖 AI Summary
This study investigates whether computation in large language models (LLMs) is uniformly distributed across parameters, challenging the conventional assumption of sparsity. To this end, we propose a mechanistic-interpretability-based method for estimating computational density, enabling systematic quantification of how density varies in Transformer models across inputs. Our experiments reveal that LLMs predominantly employ dense computation, shifting dynamically between sparse and dense regimes depending on the input. Notably, the same input elicits highly consistent density responses across different models. Furthermore, predicting rare words demands higher computational density, whereas longer contexts generally reduce it. This work offers a novel perspective and quantitative tools for understanding the internal computational mechanisms of LLMs.
📝 Abstract
Transformer-based large language models (LLMs) comprise billions of parameters arranged in deep and wide computational graphs. Several studies on LLM efficiency optimization argue that a significant portion of these parameters can be pruned while only marginally impacting performance, which suggests that computation is not uniformly distributed across the parameters. Here, we introduce a technique to systematically quantify computation density in LLMs. In particular, we design a density estimator drawing on mechanistic interpretability. Testing our estimator experimentally, we find that: (1) contrary to what has often been assumed, LLM processing generally involves dense computation; (2) computation density is dynamic, in the sense that models shift between sparse and dense processing regimes depending on the input; (3) per-input density is significantly correlated across LLMs, suggesting that the same inputs consistently trigger either low or high density. Investigating the factors that influence density, we observe that predicting rarer tokens requires higher density, while increasing the context length often decreases it. We believe that our computation density estimator will contribute to a better understanding of the processing at work in LLMs, challenging their symbolic interpretation.
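To make the notion of input-dependent computation density concrete, here is a minimal toy sketch in Python/NumPy. It measures, for one ReLU MLP layer, the fraction of hidden units whose activation is non-negligible for a given input. The layer shapes, the ReLU MLP, and the relative-threshold rule `tau` are illustrative assumptions for this sketch only, not the estimator proposed in the paper:

```python
import numpy as np

def activation_density(x, W1, tau=0.01):
    """Toy proxy for per-input computation density.

    Returns the fraction of hidden units in a single ReLU layer
    whose activation exceeds tau times the maximum activation.
    Illustrative only -- NOT the paper's mechanistic estimator.
    """
    h = np.maximum(0.0, x @ W1)  # hidden activations after ReLU
    peak = h.max()
    if peak == 0.0:              # no unit fires at all
        return 0.0
    return float((h > tau * peak).mean())

# Hypothetical demo: a random Gaussian layer and a random input.
rng = np.random.default_rng(0)
d_model, d_hidden = 16, 64
W1 = rng.normal(size=(d_model, d_hidden))
x = rng.normal(size=d_model)

rho = activation_density(x, W1)  # density in [0, 1] for this input
```

Under this toy definition, density is a per-input scalar in [0, 1]: comparing `rho` across inputs (or across layers) is what would reveal shifts between sparse and dense regimes, in the spirit of finding (2) above.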