🤖 AI Summary
This work addresses a critical limitation in existing representation probing methods, which often misidentify spurious representations in untrained models or impose unwarranted geometric constraints on learned features. To overcome this, the authors propose a novel paradigm that treats representations not as static activation patterns but as dynamic channels of learning-induced signal propagation. Specifically, they adversarially fine-tune a language model using perturbations applied to a single sample and track how these perturbations propagate across other samples. By using perturbation propagation—rather than geometric assumptions—as the criterion for representational validity, their approach effectively distinguishes genuine representations in trained models from artifacts in untrained ones. Empirical results reveal systematic cross-sample transfer effects across multiple levels of linguistic structure, suggesting that models spontaneously acquire abstract linguistic knowledge and generalize it along representational dimensions.
📝 Abstract
Linguistic representation learning in deep neural language models (LMs) has been studied for decades, for both practical and theoretical reasons. However, finding representations in LMs remains an unsolved problem, in part due to a dilemma between enforcing implausible constraints on representations (e.g., linearity; Arora et al., 2024) and trivializing the notion of representation altogether (Sutter et al., 2025). Here we escape this dilemma by reconceptualizing representations not as patterns of activation but as conduits for learning. Our approach is simple: we perturb an LM by fine-tuning it on a single adversarial example and measure how this perturbation "infects" other examples. Perturbation makes no geometric assumptions, and unlike other methods, it does not find representations where it should not (e.g., in untrained LMs). But in trained LMs, perturbation reveals structured transfer at multiple linguistic grain sizes, suggesting that LMs both generalize along representational lines and acquire linguistic abstractions from experience alone.
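The measurement logic of the paradigm can be illustrated with a toy sketch (plain NumPy, with a linear model standing in for the LM; all names, the loss, and the setup are illustrative assumptions, not the authors' implementation): take a fitted model, fine-tune it on one example with an adversarial target, and record how the loss on every *other* example shifts.

```python
import numpy as np

def per_example_loss(w, X, y):
    # squared error of a linear model, one value per example
    return (X @ w - y) ** 2

def finetune_step(w, x, y_adv, lr=0.5):
    # one gradient step on a single (adversarial) example
    grad = 2 * (x @ w - y_adv) * x
    return w - lr * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))          # six "samples" (stand-ins for sentences)
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                       # noiseless targets

w = w_true.copy()                    # "trained" model: fits every example
before = per_example_loss(w, X, y)

# adversarially fine-tune on example 0 with a corrupted target
w_adv = finetune_step(w, X[0], y[0] + 5.0)
after = per_example_loss(w_adv, X, y)

# propagation profile: loss change induced on each example by the
# single-example perturbation; structured (non-uniform) transfer to
# examples 1..5 is the signature of a shared representation
propagation = after - before
```

In this toy, transfer to example *i* scales with the overlap between example *i* and the perturbed example, so the propagation profile directly exposes which samples share structure with the perturbed one; the paper applies the same idea to an LM, with linguistic structure in place of geometric overlap.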