🤖 AI Summary
Current research on Transformer circuits lacks systematic validation of their stability across model instances, limiting trust in their use for safety-critical applications. This work presents the first systematic evaluation of representational consistency among attention heads, layer by layer, under different random initializations, employing cross-retraining experiments, similarity metrics for attention-head representations, and residual-stream analysis, while also investigating the impact of optimization strategies. The study reveals that middle-layer attention heads exhibit the lowest stability yet possess the most distinctive representations; paradoxically, unstable heads in deeper layers often perform more critical functions. Furthermore, weight decay substantially enhances overall stability, and the residual stream itself maintains high representational consistency. These findings offer a new perspective on the universality and robustness of internal mechanisms in Transformer architectures.
📝 Abstract
In mechanistic interpretability, recent work scrutinizes transformer "circuits": sparse, single- or multi-layer sub-computations that may implement human-understandable functions. Yet these circuits are rarely acid-tested for stability across different instances of the same deep learning architecture. Without such tests, it remains unclear whether reported circuits emerge universally across labs or are idiosyncratic to a particular training run, potentially limiting confidence in safety-critical settings. Here, we systematically study stability across refits of increasingly complex transformer language models of various sizes. We quantify, layer by layer, how similarly attention heads learn representations across independently initialized training runs. Our experiments show that (1) middle-layer heads are the least stable yet the most representationally distinct; (2) deeper models exhibit stronger mid-depth divergence; (3) unstable heads in deeper layers are more functionally important than their peers from the same layer; (4) applying weight decay substantially improves attention-head stability across random model initializations; and (5) the residual stream is comparatively stable. Our findings establish the cross-instance robustness of circuits as an essential yet underappreciated prerequisite for scalable oversight, drawing contours around the possible white-box monitorability of AI systems.
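The abstract does not name the similarity metric used to compare attention-head representations across runs, so as a hedged illustration only, here is a minimal sketch of one standard choice for this kind of cross-run comparison: linear Centered Kernel Alignment (CKA). It scores two activation matrices (same inputs, different training runs) on a 0-to-1 scale and is invariant to orthogonal rotations of the feature basis, which is desirable because independently initialized runs have no reason to align their coordinate axes. The shapes and the `linear_cka` helper below are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two activation matrices.

    X, Y: (n_samples, n_features) outputs of one attention head, recorded
    on the same inputs, from two independently initialized training runs.
    Returns a value in [0, 1]; 1 means identical up to orthogonal rotation.
    """
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return float(num / den)

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))                 # head activations, run A (hypothetical)
Q, _ = np.linalg.qr(rng.standard_normal((64, 64))) # random orthogonal basis change
rotated = X @ Q                                    # same representation, rotated basis
unrelated = rng.standard_normal((256, 64))         # activations from an unrelated head

print(linear_cka(X, rotated))    # ≈ 1.0: invariant to orthogonal transforms
print(linear_cka(X, unrelated))  # well below 1: dissimilar representations
```

A "stable" head in the paper's sense would score high against its counterpart in a retrained model; per-layer averages of such scores would then reveal the mid-depth divergence the abstract reports.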