🤖 AI Summary
Existing circuit discovery methods for language models suffer from low efficiency or poor accuracy. This work reformulates circuit discovery as an edge-level differentiable optimization problem and introduces a gradient-driven edge pruning strategy, departing from conventional neuron- or module-level pruning. The method combines differentiable sparse optimization with circuit-faithfulness objectives, and its correctness is validated on Tracr-compiled models with known ground-truth circuits. Experiments demonstrate: (1) circuits in GPT-2 with fewer than half the edges of those found by prior methods, at equal faithfulness; (2) exact recovery of ground-truth circuits in Tracr models; and (3) discovery of instruction-prompting and in-context learning circuits in CodeLlama-13B, achieving >99.96% sparsity while matching full-model performance and revealing substantial mechanistic overlap between the two settings. Overall, this work enables faithful, highly sparse, and scalable automated circuit discovery in large language models.
📝 Abstract
The path to interpreting a language model often proceeds via analysis of circuits -- sparse computational subgraphs of the model that capture specific aspects of its behavior. Recent work has automated the task of discovering circuits. Yet, these methods have practical limitations, as they rely either on inefficient search algorithms or inaccurate approximations. In this paper, we frame automated circuit discovery as an optimization problem and propose *Edge Pruning* as an effective and scalable solution. Edge Pruning leverages gradient-based pruning techniques, but instead of removing neurons or components, it prunes the *edges* between components. Our method finds circuits in GPT-2 that use less than half the number of edges compared to circuits found by previous methods while being equally faithful to the full model predictions on standard circuit-finding tasks. Edge Pruning is efficient even with as many as 100K examples, outperforming previous methods in speed and producing substantially better circuits. It also perfectly recovers the ground-truth circuits in two models compiled with Tracr. Thanks to its efficiency, we scale Edge Pruning to CodeLlama-13B, a model over 100x the scale of those that prior methods operate on. We use this setting for a case study comparing the mechanisms behind instruction prompting and in-context learning. We find two circuits with more than 99.96% sparsity that match the performance of the full model and reveal that the mechanisms in the two settings overlap substantially. Our case study shows that Edge Pruning is a practical and scalable tool for interpretability and sheds light on behaviors that only emerge in large models.
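To give intuition for the optimization framing, here is a minimal toy sketch (not the paper's implementation) of gradient-based edge masking: each edge carries a continuous mask, and gradient descent trades off faithfulness to the full model's output against a sparsity penalty, after which masks are thresholded into a discrete circuit. The component contributions, sparsity weight, and step size below are illustrative assumptions.

```python
import numpy as np

# Toy "model": three components whose contributions to the output are fixed
# vectors. We learn a continuous mask z over the edges so that the masked
# output stays faithful to the full model while unimportant edges drop out.
V = np.array([[3.0, 0.0],
              [0.0, 2.0],
              [0.01, -0.01]])   # per-edge contributions (assumed toy values)
y_full = V.sum(axis=0)          # full-model output: all edges active

z = np.ones(3)                  # continuous edge masks, kept in [0, 1]
lam, lr = 0.05, 0.1             # sparsity weight and step size (assumed)
for _ in range(500):
    out = z @ V                                  # masked-circuit output
    grad = 2 * V @ (out - y_full) + lam          # faithfulness + L1 gradients
    z = np.clip(z - lr * grad, 0.0, 1.0)

circuit = z > 0.5               # threshold masks into a discrete circuit
# The near-zero third edge is pruned; the masked output stays close to y_full.
```

The analytic gradient here stands in for backpropagation through a real transformer; the key idea carried over from the abstract is that sparsification happens on edges between components rather than on the components themselves.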