Linear Representation Transferability Hypothesis: Leveraging Small Models to Steer Large Models

📅 2025-05-31
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the linear transferability of semantic representations across language models of differing scales. Method: We propose the Linear Representation Transferability (LRT) hypothesis: steering vectors that encode semantics in smaller models remain effective for eliciting target behaviors in larger models after an affine transformation. To operationalize this, we formally define a general affine mapping between cross-scale representation spaces and introduce a mapping-learning framework grounded in hidden-state alignment and behavior-guided distillation. Experiments are conducted across the LLaMA family of models spanning multiple scales. Contribution/Results: Our approach achieves over 85% behavioral retention when transferring steering vectors from smaller to larger models on tasks including style control and factual correction, validating that small models can serve as lightweight, interpretable behavioral controllers for large models. This establishes a novel, efficient, and transparent paradigm for large-model intervention.

๐Ÿ“ Abstract
It has been hypothesized that neural networks with similar architectures trained on similar data learn shared representations relevant to the learning task. We build on this idea by extending the conceptual framework in which representations learned across models trained on the same data can be expressed as linear combinations of a universal set of basis features. These basis features underlie the learning task itself and remain consistent across models, regardless of scale. From this framework, we propose the Linear Representation Transferability (LRT) Hypothesis -- that there exists an affine transformation between the representation spaces of different models. To test this hypothesis, we learn affine mappings between the hidden states of models of different sizes and evaluate whether steering vectors -- directions in hidden-state space associated with specific model behaviors -- retain their semantic effect when transferred from small to large language models using the learned mappings. We find strong empirical evidence that such affine mappings can preserve steering behaviors. These findings suggest that representations learned by small models can be used to guide the behavior of large models, and that the LRT hypothesis may be a promising direction for understanding representation alignment across model scales.
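The core procedure the abstract describes (fit an affine map between paired hidden states of a small and a large model, then push a steering vector through it) can be sketched with ordinary least squares. This is a minimal illustration on synthetic data; the array shapes, the `lstsq` fit, and the exactly-affine toy relation are assumptions for demonstration, not the paper's actual training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for paired hidden states on the same inputs:
# H_small has the small model's width d_s, H_large the large model's d_l.
n, d_s, d_l = 512, 64, 128
H_small = rng.normal(size=(n, d_s))

# Toy ground truth: assume the large model's states are exactly affine
# in the small model's states (the idealized LRT setting).
true_W = rng.normal(size=(d_s, d_l)) / np.sqrt(d_s)
true_b = rng.normal(size=d_l)
H_large = H_small @ true_W + true_b

# Fit h_large ~= h_small @ W + b by least squares; appending a ones
# column lets the bias be learned jointly with the linear part.
X = np.hstack([H_small, np.ones((n, 1))])
sol, *_ = np.linalg.lstsq(X, H_large, rcond=None)
W, b = sol[:-1], sol[-1]

# A steering vector is a difference of hidden states, so the bias
# cancels: transfer it with the linear part alone.
v_small = rng.normal(size=d_s)   # hypothetical steering direction
v_large = v_small @ W            # mapped into the large model's space
```

With enough paired states (here n is well above d_s + 1), the fit recovers the toy map exactly; on real models the residual would instead measure how well the affine hypothesis holds at that layer.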
Problem

Research questions and friction points this paper is trying to address.

Test affine mappings between model representation spaces
Transfer steering behaviors from small to large models
Explore linear representation transferability across model scales
Innovation

Methods, ideas, or system contributions that make the work stand out.

Affine mappings between model hidden states
Linear Representation Transferability Hypothesis
Steering vectors transfer across model scales
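The last bullet can be made concrete: once a transferred direction exists in the large model's space, steering amounts to adding a scaled copy of it to a hidden state. A minimal sketch, where the fitted map `W`, the steering strength `alpha`, and the normalization step are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
d_s, d_l = 64, 128

W = rng.normal(size=(d_s, d_l)) / np.sqrt(d_s)  # hypothetical fitted affine map (linear part)
v_small = rng.normal(size=d_s)                  # steering direction in the small model
h_large = rng.normal(size=d_l)                  # one large-model hidden state

alpha = 4.0                                     # steering strength (tunable)
v_large = v_small @ W                           # transferred direction
# Normalize so alpha directly controls the size of the intervention.
h_steered = h_large + alpha * v_large / np.linalg.norm(v_large)
```

In practice this addition would be applied inside a forward hook at the chosen layer during generation; the numpy version above only shows the vector arithmetic.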
Femi Bello, University of Texas at Austin
Anubrata Das, University of Texas at Austin (Large Language Models, Interpretability, Human Centered AI, Responsible AI)
Fanzhi Zeng, UT Austin (Reinforcement Learning, AI Alignment)
Fangcong Yin, University of Texas at Austin
Leqi Liu, University of Texas at Austin