🤖 AI Summary
This work proposes a “patterning” approach that reframes interpretability as the active shaping of a neural network’s internal structure and generalization behavior through deliberate training-data design. Grounded in linear response theory, the method uses susceptibilities to quantify the model’s sensitivity to perturbations of the data distribution, then inverts this linear response relationship to solve for data interventions, such as reweighting and targeting the local learning coefficient, that steer the formation of specific circuits. Experiments on small language models show that the technique can accelerate or delay the emergence of particular circuits (e.g., the induction circuit) and, on a bracket-balancing task, guide the model to learn a prescribed algorithm. This is presented as the first demonstration of actively writing and regulating internal model structure through targeted data interventions.
📝 Abstract
Mechanistic interpretability aims to understand how neural networks generalize beyond their training data by reverse-engineering their internal structures. We introduce patterning as the dual problem: given a desired form of generalization, determine what training data produces it. Our approach is based on susceptibilities, which measure how posterior expectation values of observables respond to infinitesimal shifts in the data distribution. Inverting this linear response relationship yields the data intervention that steers the model toward a target internal configuration. We demonstrate patterning in a small language model, showing that re-weighting training data along principal susceptibility directions can accelerate or delay the formation of structure, such as the induction circuit. In a synthetic parentheses balancing task where multiple algorithms achieve perfect training accuracy, we show that patterning can select which algorithm the model learns by targeting the local learning coefficient of each solution. These results establish that the same mathematical framework used to read internal structure can be inverted to write it.
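The inversion step described in the abstract can be sketched numerically. The toy below is illustrative only, not the paper's implementation: `L` stands in for per-example losses under posterior samples, `O` for a probed internal observable, and the fluctuation-dissipation form χᵢ = −Cov(O, ℓᵢ) together with the minimum-norm inversion are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: S posterior samples, N training examples.
# L[s, i] ~ loss of example i under posterior sample s (assumed given);
# O[s]   ~ an internal observable, e.g. a circuit-formation probe.
S, N = 500, 20
L = rng.normal(size=(S, N))
O = L[:, :5].sum(axis=1) + 0.1 * rng.normal(size=S)  # O depends on examples 0..4

# Susceptibility chi_i: linear response of <O> to an infinitesimal
# up-weighting of example i, here via the posterior covariance
# (fluctuation-dissipation form, assumed for this sketch).
Lc = L - L.mean(axis=0)
chi = -(O - O.mean()) @ Lc / (S - 1)

# Invert the linear response <dO> = chi . d_eps for a target shift:
# the minimum-norm reweighting lies along the susceptibility direction.
target = -0.5
eps = target * chi / (chi @ chi)

print("predicted shift:", chi @ eps)
```

By construction the reweighting `eps` reproduces the target shift exactly at linear order, and its largest components fall on the examples the observable actually depends on, which is the sense in which reading susceptibilities can be inverted into a write.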