🤖 AI Summary
Existing methods identify global capability circuits in language models but struggle to explain model behavior on **specific inputs**. This work introduces **query circuits**, a local interpretability framework that traces internal information flow for individual queries. Methodologically, we propose the Normalized Deviation Faithfulness (NDF) metric to quantify circuit fidelity and pair it with an efficient sampling strategy to locate sparse subnetworks within Transformers. Our key contribution is the first **local, faithful, and scalable** attribution method tailored to concrete queries. Experiments across the IOI, arithmetic reasoning, MMLU, and ARC benchmarks demonstrate that a query circuit covering only 1.3% of model connections can recover about 60% of the model's performance on an MMLU question, validating both extreme sparsity and strong explanatory power.
📝 Abstract
Explaining why a language model produces a particular output requires local, input-level explanations. Existing methods uncover global capability circuits (e.g., indirect object identification), but not why the model answers a specific input query in a particular way. We introduce query circuits, which directly trace the information flow inside a model that maps a specific input to the output. Unlike surrogate-based approaches (e.g., sparse autoencoders), query circuits are identified within the model itself, resulting in more faithful and computationally accessible explanations. To make query circuits practical, we address two challenges. First, we introduce Normalized Deviation Faithfulness (NDF), a robust metric that evaluates how well a discovered circuit recovers the model's decision for a specific input and that applies broadly to circuit discovery beyond our setting. Second, we develop sampling-based methods to efficiently identify circuits that are sparse yet faithfully describe the model's behavior. Across benchmarks (IOI, arithmetic, MMLU, and ARC), we find that there exist extremely sparse query circuits within the model that can recover much of its performance on single queries. For example, a circuit covering only 1.3% of model connections can recover about 60% of performance on an MMLU question. Overall, query circuits provide a step towards faithful, scalable explanations of how language models process individual inputs.
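The abstract does not give the exact formula for Normalized Deviation Faithfulness, but a metric of this kind is typically a normalized interpolation between two reference points: a fully ablated baseline (score 0) and the full model (score 1). The sketch below illustrates that pattern under this assumption; the function name, the clipping, and the choice of the correct-answer logit as the score are all illustrative, not the paper's definition.

```python
import numpy as np

def ndf_sketch(full_score, circuit_score, ablated_score):
    """Hedged sketch of an NDF-style faithfulness score (assumed form,
    not the paper's exact definition).

    Normalizes how much of the full model's score on a query the circuit
    recovers, relative to an ablated baseline:
      0.0 -> circuit does no better than the ablated model
      1.0 -> circuit fully matches the original model's score
    """
    denom = full_score - ablated_score
    if np.isclose(denom, 0.0):
        # Full model and ablated baseline agree; deviation is undefined,
        # so report zero recovery by convention in this sketch.
        return 0.0
    ndf = (circuit_score - ablated_score) / denom
    # Clip for robustness when the circuit over- or undershoots the references.
    return float(np.clip(ndf, 0.0, 1.0))

# Example: circuit recovers 5 of the 8 logit units separating
# the ablated baseline (2.0) from the full model (10.0).
print(ndf_sketch(10.0, 7.0, 2.0))  # → 0.625
```

A normalized score like this makes circuits comparable across queries with very different raw logit scales, which is presumably why a dedicated metric is needed instead of raw logit differences.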