🤖 AI Summary
The “black-box” nature of large language models hinders rigorous performance verification. Method: We propose a framework unifying mechanistic interpretability with formal performance verification: via weight-level mechanistic reverse-engineering, we decompose small-scale Transformer behavior on the Max-of-K task into human-understandable algorithms and derive compact, machine-verifiable mathematical proofs of performance guarantees (e.g., accuracy lower bounds). Contribution/Results: This establishes an end-to-end pipeline from mechanistic understanding to formal proof. Validated across 151 random seeds and 4 values of K, the framework yields 102 distinct proof strategies; empirical results show that shorter proofs both require and provide deeper mechanistic understanding, and that higher-fidelity interpretations yield tighter performance bounds. We identify compounding structureless errors (gaps between the inferred mechanism and the model's true computation) as the key bottleneck limiting proof compactness and fidelity.
📝 Abstract
We propose using mechanistic interpretability -- techniques for reverse engineering model weights into human-interpretable algorithms -- to derive and compactly prove formal guarantees on model performance. We prototype this approach by formally proving accuracy lower bounds for a small transformer trained on Max-of-K, validating proof transferability across 151 random seeds and four values of K. We create 102 different computer-assisted proof strategies and assess their length and tightness of bound on each of our models. Using quantitative metrics, we find that shorter proofs seem to require and provide more mechanistic understanding. Moreover, we find that more faithful mechanistic understanding leads to tighter performance bounds. We confirm these connections by qualitatively examining a subset of our proofs. Finally, we identify compounding structureless errors as a key challenge for using mechanistic interpretability to generate compact proofs on model performance.
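To make the setting concrete, here is a minimal sketch of the Max-of-K task and a brute-force accuracy check. The `model` function below is a hypothetical stand-in for a trained transformer's argmax prediction (it is correct by construction); the paper's contribution is proving accuracy lower bounds compactly rather than by the exhaustive enumeration shown here.

```python
import itertools

# Max-of-K: the model reads a sequence of K tokens and should output the
# largest one. Brute-force verification enumerates every possible input,
# which grows as |vocab|^K; compact proofs avoid this enumeration.

def model(seq):
    # Hypothetical stand-in for a trained transformer's prediction;
    # here it is simply correct by construction.
    return max(seq)

def brute_force_accuracy(model, vocab, k):
    """Exact accuracy of `model` on Max-of-K over all |vocab|^k inputs."""
    inputs = list(itertools.product(vocab, repeat=k))
    correct = sum(model(s) == max(s) for s in inputs)
    return correct / len(inputs)

print(brute_force_accuracy(model, range(4), 3))  # 1.0 for this stand-in
```

Even for the small models studied (a one-layer transformer, vocabulary of 64, K up to 20), this exhaustive baseline is intractable, which is what motivates deriving provable lower bounds from the reverse-engineered mechanism instead.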