AI Summary
Existing methods for interpreting large language models (LLMs) suffer from functional distortion and incomplete coverage when extracting explanatory subnetworks ("circuits") from neural language models. Method: We propose the first differentiable computational-graph pruning framework that unifies weight and connection-edge pruning, enabling end-to-end optimization, structured sparsity regularization, and joint weight-topology pruning to co-compress model parameters and computation paths. Contribution/Results: The extracted circuits are functionally complete, highly sparse (<5% active parameters), and self-contained; they retain state-of-the-art task performance when run in isolation (demonstrating high faithfulness), while their complement models exhibit significant performance degradation (indicating high completeness). This approach provides a scalable, verifiable tool for mechanistic interpretability in generative AI, overcoming key limitations of prior circuit-extraction techniques.
Abstract
In this paper, we introduce a comprehensive reformulation of the task known as Circuit Discovery, along with DiscoGP, a novel and effective algorithm based on differentiable masking for discovering circuits. Circuit discovery is the task of interpreting the computational mechanisms of language models (LMs) by dissecting their functions and capabilities into sparse subnetworks (circuits). We identify two major limitations in existing circuit discovery efforts: (1) a dichotomy between weight-based and connection-edge-based approaches forces researchers to choose between pruning connections or weights, thereby limiting the scope of mechanistic interpretation of LMs; (2) algorithms based on activation patching tend to identify circuits that are neither functionally faithful nor complete. Circuits identified by these methods perform substantially worse than the original model, often at near-random levels when run in isolation. Furthermore, the complement of the circuit -- i.e., the original LM with the identified circuit removed -- still retains adequate performance, indicating that essential components of a complete circuit are missed by existing methods. DiscoGP successfully addresses the two aforementioned issues and demonstrates state-of-the-art faithfulness, completeness, and sparsity. The effectiveness of the algorithm and its novel structure open up new avenues for gaining insights into the internal workings of generative AI.
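The differentiable-masking idea underlying this family of methods can be sketched minimally: attach a learnable real-valued logit to each weight, pass it through a sigmoid to obtain a soft mask during training, binarize the mask to extract the discrete circuit, and penalize the mean of the soft mask as a differentiable sparsity surrogate. The following is an illustrative toy sketch, not DiscoGP's actual implementation; the layer, function names, and threshold are all hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy layer: a weight matrix W plus one learnable mask logit per weight.
# In a real setup, mask_logits would be optimized jointly with the task loss.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
mask_logits = rng.normal(size=W.shape)

def masked_forward(x, W, mask_logits, hard=False):
    """Forward pass with a soft (training) or binarized (extraction) weight mask."""
    m = sigmoid(mask_logits)
    if hard:
        # Discretize to extract the circuit: keep only weights whose
        # mask probability exceeds an (assumed) 0.5 threshold.
        m = (m > 0.5).astype(W.dtype)
    return x @ (W * m)

def sparsity_penalty(mask_logits):
    """Differentiable surrogate for the fraction of active weights (L0 proxy)."""
    return sigmoid(mask_logits).mean()

x = rng.normal(size=(2, 4))
soft_out = masked_forward(x, W, mask_logits)            # used during training
hard_out = masked_forward(x, W, mask_logits, hard=True)  # the extracted circuit
```

Because the soft mask is differentiable, the sparsity penalty can be added to the task loss and minimized end-to-end; edge-level masks follow the same pattern, gating connections between components instead of individual weights.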