LLMs on the Line: Data Determines Loss-to-Loss Scaling Laws

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the key determinants of loss-to-loss scaling laws in large language models (LLMs). We conduct a systematic empirical analysis across diverse architectures (Transformer and Mamba), datasets, and tokenizers. Our results reveal that pretraining data distribution and tokenizer design are the dominant factors governing loss-to-loss scaling behavior, whereas model size, optimization hyperparameters, and architectural differences (e.g., Llama vs. Mamba) exert only marginal influence. To our knowledge, this is the first study to rigorously validate—within a unified experimental framework—the decisive role of data in shaping scaling laws, challenging the conventional “architecture- or scale-centric” view of LLM performance. Across multiple models and downstream tasks, we consistently observe stable linear loss-to-loss scaling relationships. Crucially, scaling laws grounded in data selection yield significantly more accurate downstream performance predictions than those relying on model configuration tuning.

📝 Abstract
Scaling laws guide the development of large language models (LLMs) by offering estimates for the optimal balance of model size, tokens, and compute. More recently, loss-to-loss scaling laws that relate losses across pretraining datasets and downstream tasks have emerged as a powerful tool for understanding and improving LLM performance. In this work, we investigate which factors most strongly influence loss-to-loss scaling. Our experiments reveal that the pretraining data and tokenizer determine the scaling trend. In contrast, model size, optimization hyperparameters, and even significant architectural differences, such as between transformer-based models like Llama and state-space models like Mamba, have limited impact. Consequently, practitioners should carefully curate suitable pretraining datasets for optimal downstream performance, while architectures and other settings can be freely optimized for training efficiency.
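To make the abstract's core object concrete: a loss-to-loss scaling law relates a model's pretraining loss to its loss on a downstream task, typically as a power law (a straight line in log-log space). The sketch below fits such a relation to hypothetical loss pairs and uses it to predict downstream loss; the numbers are synthetic illustrations, not values from the paper, and the fitting form is one common choice rather than the paper's exact parameterization.

```python
import numpy as np

# Hypothetical (pretraining loss, downstream loss) pairs from models of
# increasing scale trained on a single dataset. Synthetic numbers for
# illustration only.
pretrain_loss = np.array([3.2, 2.9, 2.7, 2.5, 2.35, 2.2])
downstream_loss = np.array([3.8, 3.4, 3.15, 2.9, 2.75, 2.55])

# A power-law relation L_down = A * L_pre^kappa is linear in log-log space:
# log L_down = kappa * log L_pre + log A. Fit it with ordinary least squares.
kappa, log_A = np.polyfit(np.log(pretrain_loss), np.log(downstream_loss), 1)

def predict_downstream(l_pre: float) -> float:
    """Predict downstream loss from pretraining loss via the fitted law."""
    return float(np.exp(kappa * np.log(l_pre) + log_A))
```

Under the paper's finding, refitting this line on runs with a different pretraining dataset or tokenizer would shift `kappa` and `log_A`, while swapping the architecture (e.g., Llama for Mamba) at fixed data would leave the fit largely unchanged.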
Problem

Research questions and friction points this paper is trying to address.

Identifies which factors determine loss-to-loss scaling laws
Quantifies how pretraining data shapes downstream LLM performance
Tests whether architectural differences meaningfully shift scaling trends
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pretraining data distribution determines the loss-to-loss scaling trend
Tokenizer choice shifts loss-to-loss scaling relationships
Architecture (e.g., transformer vs. state-space) has limited impact on scaling