🤖 AI Summary
This work investigates the linear transferability of semantic representations across language models of differing scales. Method: We propose the Linear Representation Transferability (LRT) hypothesis -- that steering vectors encoding semantics in smaller models remain effective for eliciting target behaviors in larger models after undergoing an affine transformation. To operationalize this, we formally define a general affine mapping between cross-scale representation spaces and introduce a mapping-learning framework grounded in hidden-state alignment and behavior-guided distillation. Experiments are conducted across the LLaMA family of models spanning multiple scales. Contribution/Results: Our approach achieves over 85% behavioral retention when transferring steering vectors from smaller to larger models on tasks including style control and factual correction, validating that small models can serve as lightweight, interpretable behavioral controllers for large models. This establishes a novel, efficient, and transparent paradigm for large-model intervention.
📄 Abstract
It has been hypothesized that neural networks with similar architectures trained on similar data learn shared representations relevant to the learning task. We build on this idea by extending a conceptual framework in which representations learned across models trained on the same data can be expressed as linear combinations of a *universal* set of basis features. These basis features underlie the learning task itself and remain consistent across models, regardless of scale. From this framework, we propose the **Linear Representation Transferability (LRT)** Hypothesis -- that there exists an affine transformation between the representation spaces of different models. To test this hypothesis, we learn affine mappings between the hidden states of models of different sizes and evaluate whether steering vectors -- directions in hidden state space associated with specific model behaviors -- retain their semantic effect when transferred from small to large language models using the learned mappings. We find strong empirical evidence that such affine mappings can preserve steering behaviors. These findings suggest that representations learned by small models can be used to guide the behavior of large models, and that the LRT hypothesis may be a promising direction for understanding representation alignment across model scales.
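The core operation described above -- fitting an affine map between two models' hidden-state spaces and pushing a steering vector through it -- can be sketched on toy data. This is a minimal illustration, not the paper's implementation: the dimensions, the synthetic hidden states, and the plain least-squares fit (standing in for the hidden-state alignment step; the behavior-guided distillation component is omitted) are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for paired hidden states collected on the same inputs:
# H_small has the small model's hidden size, H_large the large model's.
# Dimensions are illustrative, not taken from the paper.
n, d_small, d_large = 512, 32, 64
H_small = rng.normal(size=(n, d_small))

# Plant a ground-truth affine relation so the fit has something to recover:
# H_large ~ H_small @ W_true + b_true, plus small noise.
W_true = rng.normal(size=(d_small, d_large))
b_true = rng.normal(size=d_large)
H_large = H_small @ W_true + b_true + 0.01 * rng.normal(size=(n, d_large))

# Fit the affine map A(h) = h @ W + b by least squares on the pairs
# (appending a bias column folds b into the same solve).
X = np.hstack([H_small, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(X, H_large, rcond=None)
W, b = coef[:-1], coef[-1]

# Transfer a (hypothetical) steering vector: map the two endpoints
# h and h + v and take the difference, so the bias cancels and the
# transferred direction is v @ W.
v_small = rng.normal(size=d_small)
h = H_small[0]
v_large = ((h + v_small) @ W + b) - (h @ W + b)
```

In the paper's setting `v_large` would then be added to the large model's hidden state at the corresponding layer to test whether the steered behavior is retained; here we can only check that the fitted map reconstructs the toy data well and that the transferred direction equals `v_small @ W`.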