AI Summary
This work addresses the lack of interpretability in large language model (LLM) computations by proposing the Jacobian Sparse Autoencoder (JSAE), the first method to explicitly enforce sparsity on the input-output computational mapping, i.e., the Jacobian matrix, of model components. Unlike conventional sparse autoencoders (SAEs), which impose sparsity only on activations, JSAE extends the sparsity constraint to gradients, revealing that computational sparsity in LLMs is an emergent property of training and demonstrating that the Jacobian serves as an effective proxy for computational sparsity. Methodologically, JSAE combines an efficient automatic-differentiation approximation, sparsity-constrained optimization, and an analysis of MLP linearization. Experiments show that JSAE substantially increases computational sparsity in pretrained LLMs while matching SAE performance on downstream tasks; the induced sparsity far exceeds that of randomized baselines and empirically supports the assumption that MLP layers behave near-linearly in the JSAE basis.
Abstract
Sparse autoencoders (SAEs) have been successfully used to discover sparse and human-interpretable representations of the latent activations of LLMs. However, we would ultimately like to understand the computations performed by LLMs and not just their representations. The extent to which SAEs can help us understand computations is unclear because they are not designed to "sparsify" computations in any sense, only latent activations. To solve this, we propose Jacobian SAEs (JSAEs), which yield not only sparsity in the input and output activations of a given model component but also sparsity in the computation (formally, the Jacobian) connecting them. With a naïve implementation, the Jacobians in LLMs would be computationally intractable due to their size. One key technical contribution is thus finding an efficient way of computing Jacobians in this setup. We find that JSAEs extract a relatively large degree of computational sparsity while preserving downstream LLM performance approximately as well as traditional SAEs. We also show that Jacobians are a reasonable proxy for computational sparsity because MLPs are approximately linear when rewritten in the JSAE basis. Lastly, we show that JSAEs achieve a greater degree of computational sparsity on pre-trained LLMs than on the equivalent randomized LLM. This shows that the sparsity of the computational graph appears to be a property that LLMs learn through training, and suggests that JSAEs might be more suitable for understanding learned transformer computations than standard SAEs.
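To make the core idea concrete, here is a minimal PyTorch sketch of penalizing the Jacobian between the latents of an input SAE and an output SAE placed around an MLP. This is not the authors' implementation: the `TopKSAE` class, `latent_map` function, layer sizes, and penalty weight are illustrative assumptions, and the full Jacobian is computed naïvely here, whereas the paper's contribution is an efficient computation restricted to the active latents.

```python
# Minimal sketch of the Jacobian-sparsity idea behind JSAEs (illustrative, not the paper's code).
import torch
import torch.nn.functional as F

d_model, d_sae, k = 64, 256, 8  # toy sizes, chosen for illustration

# A stand-in MLP block whose computation we want to sparsify.
mlp = torch.nn.Sequential(
    torch.nn.Linear(d_model, 4 * d_model), torch.nn.GELU(),
    torch.nn.Linear(4 * d_model, d_model),
)

class TopKSAE(torch.nn.Module):
    """Tiny top-k sparse autoencoder: encode, keep the k largest latents, decode."""
    def __init__(self, d_model, d_sae, k):
        super().__init__()
        self.enc = torch.nn.Linear(d_model, d_sae)
        self.dec = torch.nn.Linear(d_sae, d_model)
        self.k = k

    def encode(self, x):
        z = F.relu(self.enc(x))
        vals, idx = z.topk(self.k, dim=-1)
        return torch.zeros_like(z).scatter(-1, idx, vals)

    def decode(self, z):
        return self.dec(z)

sae_in = TopKSAE(d_model, d_sae, k)   # SAE on the MLP's input activations
sae_out = TopKSAE(d_model, d_sae, k)  # SAE on the MLP's output activations

def latent_map(z_in):
    """Input-SAE latents -> decode -> MLP -> encode -> output-SAE latents."""
    return sae_out.encode(mlp(sae_in.decode(z_in)))

x = torch.randn(d_model)          # a single activation vector (toy data)
z_in = sae_in.encode(x)

# Jacobian of output latents w.r.t. input latents; its sparsity is what JSAEs
# penalize. Computing the full d_sae x d_sae matrix, as done here, is what the
# paper avoids by only evaluating the k x k block between active latents.
J = torch.autograd.functional.jacobian(latent_map, z_in, create_graph=True)

recon = sae_in.decode(z_in)
loss = F.mse_loss(recon, x) + 1e-3 * J.abs().sum()  # reconstruction + Jacobian L1 (weight is arbitrary)
loss.backward()
print(J.shape, loss.item())
```

In practice a JSAE-style objective would also reconstruct the MLP's output through `sae_out` and use an efficient Jacobian computation; the sketch only shows how a sparsity penalty on the input-output Jacobian can be attached to an otherwise standard SAE training loss.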