🤖 AI Summary
Existing attribution-based parameter decomposition (APD) methods suffer from high computational cost and sensitivity to hyperparameters, which limits their scalability. This paper proposes Stochastic Parameter Decomposition (SPD), a more scalable and robust method within the linear parameter decomposition framework: it represents network weights as a sum of sparsely used vectors in parameter space, and bridges causal mediation analysis with network decomposition to avoid explicit gradient attribution, thereby reducing computational cost. SPD is less sensitive to hyperparameter choices, avoids shrinkage of the learned parameters, and decomposes models that are slightly larger and more complex than APD could handle. On toy models, it better recovers ground-truth mechanisms. The implementation, along with code to reproduce the experiments, is publicly released.
📝 Abstract
A key step in reverse engineering neural networks is to decompose them into simpler parts that can be studied in relative isolation. Linear parameter decomposition -- a framework that has been proposed to resolve several issues with current decomposition methods -- decomposes neural network parameters into a sum of sparsely used vectors in parameter space. However, the current main method in this framework, Attribution-based Parameter Decomposition (APD), is impractical on account of its computational cost and sensitivity to hyperparameters. In this work, we introduce *Stochastic Parameter Decomposition* (SPD), a method that is more scalable and robust to hyperparameters than APD, which we demonstrate by decomposing models that are slightly larger and more complex than was possible to decompose with APD. We also show that SPD avoids other issues, such as shrinkage of the learned parameters, and better identifies ground truth mechanisms in toy models. By bridging causal mediation analysis and network decomposition methods, this work removes barriers to scaling linear parameter decomposition methods to larger models and opens up new research possibilities in mechanistic interpretability. We release a library for running SPD and reproducing our experiments at https://github.com/goodfire-ai/spd.
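To make the core idea concrete, here is a minimal NumPy sketch (a hypothetical illustration, not the paper's implementation) of what "decomposing parameters into a sum of sparsely used vectors in parameter space" means: a weight matrix is written as a sum of rank-one subcomponents, of which only a sparse subset is treated as causally important for any given input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup for illustration: a weight matrix W decomposed into C
# rank-one subcomponents, W = sum_c u_c v_c^T.
d_out, d_in, C = 8, 6, 10
U = rng.normal(size=(C, d_out))   # output directions u_c
V = rng.normal(size=(C, d_in))    # input directions v_c

# Full weights: sum over all C subcomponents.
W = np.einsum("co,ci->oi", U, V)

# For a given input, a sparse mask m in {0,1}^C keeps only the few
# subcomponents deemed causally important; if the masked-out components
# truly don't matter for that input, the masked weights should compute
# (approximately) the same function on it.
mask = np.zeros(C)
mask[[2, 7]] = 1.0                # pretend only components 2 and 7 matter
W_masked = np.einsum("c,co,ci->oi", mask, U, V)

assert W.shape == W_masked.shape == (d_out, d_in)
```

The component indices and dimensions here are arbitrary; in SPD the sparsity of component use is learned via stochastic optimization rather than fixed by hand.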