🤖 AI Summary
Pretrained decoder-only large language models face a data bottleneck: the supply of high-quality training text is running out, while abundant linguistic metadata (syntactic, semantic, and contextual information) remains underutilized as a direct training signal. This paper introduces LIME (Linguistic Metadata Embeddings), a method that enriches token embeddings with such metadata while adding only 0.01% extra parameters at negligible compute overhead. It further proposes LIME+1, a lightweight variant with metadata shifted by one position, so that prior metadata for the next token can guide generation. Experiments show up to 56% faster adaptation to the training data distribution, improved tokenization and language modeling, and, with LIME+1, gains of up to 38% in reasoning performance and up to 35% in arithmetic accuracy, with benefits persisting across model scales from 500M to 2B parameters. The core contribution lies in elevating linguistic metadata from a passive data-curation tool to a learnable, embedded training signal.
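To make the mechanism concrete, here is a minimal PyTorch sketch of what a LIME-style embedding layer might look like: per-token metadata IDs (e.g., POS-like tags) are mapped through a small embedding table and added to the ordinary token embeddings. The class name, the additive combination, and the tag inventory are illustrative assumptions rather than the paper's exact design; the tiny metadata table is at least consistent with the reported ~0.01% parameter overhead.

```python
import torch
import torch.nn as nn

class LimeEmbedding(nn.Module):
    """Hypothetical LIME-style layer: token embeddings enriched with
    per-token linguistic metadata embeddings. The additive combination
    is an assumption, not the paper's confirmed architecture."""

    def __init__(self, vocab_size: int, n_meta_tags: int, d_model: int):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        # The metadata table is tiny relative to the vocabulary table,
        # which keeps the parameter overhead negligible.
        self.meta_emb = nn.Embedding(n_meta_tags, d_model)

    def forward(self, token_ids: torch.Tensor, meta_ids: torch.Tensor) -> torch.Tensor:
        # token_ids, meta_ids: (batch, seq_len); one metadata tag per token.
        return self.tok_emb(token_ids) + self.meta_emb(meta_ids)


# Toy usage: 17 POS-like tags over a 32k vocabulary.
layer = LimeEmbedding(vocab_size=32_000, n_meta_tags=17, d_model=512)
tokens = torch.randint(0, 32_000, (2, 8))
tags = torch.randint(0, 17, (2, 8))
print(layer(tokens, tags).shape)  # torch.Size([2, 8, 512])
```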
📝 Abstract
Pre-training decoder-only language models relies on vast amounts of high-quality data, yet the availability of such data is increasingly reaching its limits. While metadata is commonly used to create and curate these datasets, its potential as a direct training signal remains under-explored. We challenge this status quo and propose LIME (Linguistic Metadata Embeddings), a method that enriches token embeddings with metadata capturing syntax, semantics, and contextual properties. LIME substantially improves pre-training efficiency. Specifically, it adapts up to 56% faster to the training data distribution, while introducing only 0.01% additional parameters at negligible compute overhead. Beyond efficiency, LIME improves tokenization, leading to remarkably stronger language modeling capabilities and generative task performance. These benefits persist across model scales (500M to 2B). In addition, we develop a variant with shifted metadata, LIME+1, that can guide token generation. Given prior metadata for the next token, LIME+1 improves reasoning performance by up to 38% and arithmetic accuracy by up to 35%.
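The "shifted metadata" idea behind LIME+1 can be illustrated with a short sketch: each position carries the metadata of the token it is about to predict, so supplying the desired tag for the upcoming token lets it steer generation. The helper name and the padding convention for the final position are hypothetical, assumed only for this example.

```python
import torch

def shift_metadata(meta_ids: torch.Tensor, no_meta_id: int = 0) -> torch.Tensor:
    """Shift per-token metadata IDs left by one position, so that
    position t carries the metadata of token t+1. A sketch of the
    LIME+1 shifting; the fill value for the last position is an
    assumed convention, not taken from the paper."""
    shifted = torch.roll(meta_ids, shifts=-1, dims=-1)
    shifted[..., -1] = no_meta_id  # final position has no next token yet
    return shifted


# Toy example: the model at each position now sees the metadata tag of
# the next token, which is what allows metadata to guide generation.
meta = torch.tensor([[3, 1, 4, 1, 5]])
print(shift_metadata(meta))  # tensor([[1, 4, 1, 5, 0]])
```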