Emergence of a High-Dimensional Abstraction Phase in Language Transformers

📅 2024-05-24
🏛️ arXiv.org
📈 Citations: 7
Influential: 3
🤖 AI Summary
This study investigates when the High-Dimensional Abstraction Phase (HDAP) emerges in language models and how its timing relates to modeling capability and transfer performance. Using intrinsic dimensionality estimation and cross-model representational similarity analysis (via CKA), we conduct systematic experiments across five Transformer-based language models and three benchmark datasets. We identify and formally define HDAP as a universal phase in which the input tokens are first fully abstracted into linguistic representations, yielding strong downstream transferability and cross-model predictability. Earlier emergence of HDAP strongly predicts lower perplexity (PPL), and layers within HDAP consistently achieve the best performance in both cross-task and cross-model evaluation settings. Our work provides a geometric perspective, together with quantifiable metrics, for understanding the internal linguistic processing of large language models.
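
The summary's central tool, layer-wise intrinsic dimensionality (ID) estimation, is easy to sketch. Below is a minimal illustration using the TwoNN estimator of Facco et al. (2017), a standard choice for this kind of profile; the estimator, the gpt2 checkpoint, and the probe file are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
import torch
from sklearn.neighbors import NearestNeighbors
from transformers import AutoModel, AutoTokenizer

def twonn_id(X: np.ndarray) -> float:
    """TwoNN intrinsic-dimension estimate (Facco et al., 2017).

    For each point, take the ratio mu = r2 / r1 of its two nearest-
    neighbour distances; the maximum-likelihood estimate of the
    intrinsic dimension is d = N / sum(log mu).
    """
    dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    r1, r2 = dists[:, 1], dists[:, 2]        # column 0 is the point itself
    mu = r2 / r1
    mu = mu[np.isfinite(mu) & (mu > 1.0)]    # drop duplicated points
    return len(mu) / np.log(mu).sum()

# Illustrative setup: any HF checkpoint that exposes hidden states.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True).eval()

text = open("probe_corpus.txt").read()       # hypothetical probe text
enc = tok(text, return_tensors="pt", truncation=True, max_length=1024)
with torch.no_grad():
    hidden = model(**enc).hidden_states      # (n_layers + 1) x [1, T, D]

# One ID estimate per layer, over the token representations.
profile = [twonn_id(h[0].numpy()) for h in hidden]
```

In the paper's terms, a pronounced mid-network peak in this profile marks the high-dimensional abstraction phase, and how early that peak appears is what correlates with perplexity.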

📝 Abstract
A language model (LM) is a mapping from a linguistic context to an output token. However, much remains to be known about this mapping, including how its geometric properties relate to its function. We take a high-level geometric approach to its analysis, observing, across five pre-trained transformer-based LMs and three input datasets, a distinct phase characterized by high intrinsic dimensionality. During this phase, representations (1) correspond to the first full linguistic abstraction of the input; (2) are the first to viably transfer to downstream tasks; (3) predict each other across different LMs. Moreover, we find that an earlier onset of the phase strongly predicts better language modelling performance. In short, our results suggest that a central high-dimensionality phase underlies core linguistic processing in many common LM architectures.
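
Property (3), cross-model predictability, is typically quantified with a representational similarity index such as the CKA mentioned in the summary above. Here is a minimal sketch of linear CKA (Kornblith et al., 2019); the stand-in activation matrices are assumptions for demonstration only.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two activation matrices over the same n inputs:
    X is (n, d1), Y is (n, d2); 1.0 means maximally aligned."""
    X = X - X.mean(axis=0, keepdims=True)    # centre each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro")
                   * np.linalg.norm(Y.T @ Y, "fro"))

# Sanity check: CKA is invariant to rotation and isotropic scaling,
# which is why it can compare layers of models with different widths.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 768))              # stand-in layer activations
Q, _ = np.linalg.qr(rng.normal(size=(768, 768)))
print(round(linear_cka(X, 3.0 * X @ Q), 3))  # ~1.0
```

In a cross-model experiment one would compute this score between every layer pair of two LMs run on a shared corpus; the finding summarized above is that this cross-model similarity peaks at the high-dimensionality layers.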
Problem

The research questions and friction points this paper addresses.

Representation Geometry
Language Models
Pre-training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Intrinsic Dimensionality Analysis
Layer-wise Analysis of Pre-trained Models
Knowledge Transfer in Language Processing