🤖 AI Summary
This study investigates how instruction tuning and long chain-of-thought distillation structurally reshape the parameter space of the principal linear layers in large language models (LLMs).
Method: We conduct systematic singular value decomposition (SVD) analyses on critical layers of mainstream LLMs, characterizing post-training as a reparameterization within the pretrained parameter subspace, driven not by singular value scaling but primarily by coordinated orthogonal rotations of the left and right singular vectors.
Contribution/Results: We identify rotation consistency, the preservation of orthogonal relationships among singular vectors, as a fundamental regularity enabling functional transfer, challenging the "black-box" view of LLM parameters. Empirical validation shows that artificially disrupting orthogonality causes catastrophic performance degradation, confirming its necessity. Our work establishes subspace reparameterization as a novel, interpretable, and empirically verifiable framework for understanding LLM post-training, with singular-vector rotation, rather than singular-value modulation, as the dominant structural mechanism.
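The necessity claim above can be illustrated with a small numerical sketch. This is a toy proxy, not the paper's experiment: the dimensions and noise scale are assumed for illustration. Reconstructing a weight matrix from its exact SVD is lossless, while perturbing one singular-vector basis so it is no longer orthonormal moves the layer far from its original function.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64  # toy dimension, not from the paper

# Toy stand-in for one linear layer of a pretrained model.
W = rng.standard_normal((d, d))
U, S, Vt = np.linalg.svd(W)

# Control: an exact SVD reconstruction is lossless.
W_intact = U @ np.diag(S) @ Vt

# Disruption: additive noise destroys the orthonormality of U.
U_bad = U + 0.1 * rng.standard_normal((d, d))
W_broken = U_bad @ np.diag(S) @ Vt

rel_err = lambda A: np.linalg.norm(A - W) / np.linalg.norm(W)
ortho_loss = np.linalg.norm(U_bad.T @ U_bad - np.eye(d))

print(f"intact reconstruction error:    {rel_err(W_intact):.2e}")
print(f"disrupted reconstruction error: {rel_err(W_broken):.2f}")
print(f"deviation of U_bad from orthonormality: {ortho_loss:.2f}")
```

In this toy setting the intact reconstruction error is at numerical noise level, while the disrupted matrix deviates from the original by a large fraction of its norm, mirroring (in a loose, matrix-level sense) the catastrophic degradation the paper reports.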
📝 Abstract
Post-training fundamentally alters the behavior of large language models (LLMs), yet its impact on the internal parameter space remains poorly understood. In this work, we conduct a systematic singular value decomposition (SVD) analysis of principal linear layers in pretrained LLMs, focusing on two widely adopted post-training methods: instruction tuning and long chain-of-thought (Long-CoT) distillation. Our analysis reveals two consistent and unexpected structural changes: (1) a near-uniform geometric scaling of singular values across layers, which theoretically modulates attention scores; and (2) highly consistent orthogonal transformations of the left and right singular vectors of each matrix. Disrupting this orthogonal consistency leads to catastrophic performance degradation. Based on these findings, we propose a simple yet effective framework that interprets post-training as a reparameterization of fixed subspaces in the pretrained parameter space. Further experiments reveal that singular value scaling behaves as a secondary effect, analogous to a temperature adjustment, whereas the core functional transformation lies in the coordinated rotation of singular vectors. These results challenge the prevailing view of the parameter space in large models as a black box, uncovering the first clear regularities in how parameters evolve during training, and providing a new perspective for deeper investigation into model parameter changes.
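As a hedged illustration of the analysis described above, the sketch below builds a toy "post-trained" matrix by applying exactly the reparameterization the abstract hypothesizes (orthogonal rotations of both singular-vector bases plus a uniform geometric scaling of singular values), then recovers both effects from the two checkpoints. The matrix size, perturbation scale, and the 1.1 scaling factor are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy layer width

def near_identity_rotation(d, eps=0.05):
    """Orthogonal matrix close to I: QR of a small perturbation of I."""
    Q, R = np.linalg.qr(np.eye(d) + eps * rng.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))  # fix column signs so Q stays near I

# Toy stand-in for one pretrained linear layer.
W_pre = rng.standard_normal((d, d))
U, S, Vt = np.linalg.svd(W_pre)

# Hypothesized post-training reparameterization: rotate both singular
# bases with orthogonal transforms and scale the singular values.
R_u, R_v = near_identity_rotation(d), near_identity_rotation(d)
scale = 1.1  # near-uniform geometric scaling (illustrative value)
W_post = (U @ R_u) @ np.diag(scale * S) @ (Vt.T @ R_v).T

# Analysis: compare the SVDs of the two "checkpoints".
U2, S2, Vt2 = np.linalg.svd(W_post)

# (1) Singular values should shift by a near-constant ratio.
ratios = S2 / S
print(f"singular-value ratio: mean={ratios.mean():.3f}, std={ratios.std():.2e}")

# (2) The left bases should differ by an orthogonal rotation:
# R_hat = U.T @ U2 should satisfy R_hat @ R_hat.T ~ I.
R_hat = U.T @ U2
ortho_err = np.linalg.norm(R_hat @ R_hat.T - np.eye(d))
print(f"orthogonality error of recovered rotation: {ortho_err:.2e}")
```

On this synthetic example, the singular-value ratios concentrate at the injected 1.1 factor and the recovered inter-checkpoint rotation is orthogonal to numerical precision; applied to real checkpoints, the same two diagnostics separate singular-value modulation from singular-vector rotation.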