🤖 AI Summary
This study addresses embedding condensation—a phenomenon in which token embeddings of small-scale language models collapse into a narrow subspace—which limits representational capacity and degrades generalization. The authors systematically reveal a strong correlation between this phenomenon and model scale, demonstrating that smaller models are particularly prone to condensation. To mitigate it without increasing model parameters, they propose a dispersion loss that explicitly encourages embedding diversity through angular or distance-based constraints. Experiments across ten benchmark tasks show that this approach improves the performance of small models and recovers the embedding distribution characteristics typically observed in larger models, offering a principled way to optimize compact language models.
📝 Abstract
Large language models (LLMs) achieve remarkable performance through ever-increasing parameter counts, but scaling incurs steep computational costs. To better understand LLM scaling, we study representational differences between LLMs and their smaller counterparts, with the goal of replicating the representational qualities of larger models in smaller ones. We observe a geometric phenomenon which we term $\textbf{embedding condensation}$, where token embeddings collapse into a narrow cone-like subspace in some language models. Through systematic analyses across multiple Transformer families, we show that small models such as $\texttt{GPT2}$ and $\texttt{Qwen3-0.6B}$ exhibit severe condensation, whereas larger models such as $\texttt{GPT2-xl}$ and $\texttt{Qwen3-32B}$ are more resistant to this phenomenon. Additional observations show that embedding condensation is not reliably mitigated by knowledge distillation from larger models. To counteract it, we formulate a dispersion loss that explicitly encourages embedding dispersion during training. Experiments demonstrate that it mitigates condensation, recovers the dispersion patterns seen in larger models, and yields performance gains across 10 benchmarks. We believe this work offers a principled path toward improving smaller Transformers without additional parameters.
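The abstract does not spell out the dispersion loss, but a minimal angular variant can be sketched as a penalty on the mean pairwise cosine similarity of the embedding rows: minimizing it pushes embeddings apart on the unit sphere, counteracting collapse into a narrow cone. The function name and implementation below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def dispersion_loss(embeddings: np.ndarray, eps: float = 1e-8) -> float:
    """Mean pairwise cosine similarity of an embedding matrix (rows = tokens).

    A collapsed embedding table (all vectors in a narrow cone) scores close
    to 1.0; a well-dispersed one scores near 0.0. Adding this term to the
    training objective therefore penalizes condensation.
    """
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / np.maximum(norms, eps)   # project rows onto unit sphere
    sim = unit @ unit.T                          # pairwise cosine similarities
    n = sim.shape[0]
    off_diag = sim[~np.eye(n, dtype=bool)]       # drop self-similarity terms
    return float(off_diag.mean())

# Collapsed embeddings (identical rows) vs. orthogonal embeddings:
collapsed = np.ones((4, 3))
dispersed = np.eye(3)
print(dispersion_loss(collapsed))  # ~1.0 (severe condensation)
print(dispersion_loss(dispersed))  # 0.0 (fully dispersed)
```

In practice such a term would be computed on mini-batches of the embedding table and added to the language-modeling loss with a weighting coefficient; a distance-based variant could instead penalize small pairwise Euclidean distances.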