🤖 AI Summary
This study investigates whether and how noisy data causes training loss divergence during the pretraining of large language models. By injecting controlled synthetic uniform random noise into clean corpora and training Transformer models at multiple scales (480M to 5.2B parameters), the work systematically examines how noise type, noise proportion, and model scale affect divergence behavior. It establishes, for the first time, a causal link between data noise and loss divergence, introduces diagnostics that distinguish noise-induced divergence from divergence caused by high learning rates, and uncovers activation patterns characteristic of noise-driven divergence. The findings show that the likelihood of divergence depends strongly on both noise properties and model size, offering theoretical insights and practical guidance for robust pretraining.
📝 Abstract
Large-scale pretraining datasets drive the success of large language models (LLMs). However, these web-scale corpora inevitably contain large amounts of noisy data, whether from unregulated web content or from randomness inherent in the data itself. Although LLM pretrainers often speculate that such noise contributes to instabilities in large-scale LLM pretraining and, in the worst cases, loss divergence, this phenomenon remains poorly understood. In this work, we present a systematic empirical study of whether noisy data causes LLM pretraining divergences and how it does so. By injecting controlled synthetic uniformly random noise into otherwise clean datasets, we analyze training dynamics across model sizes ranging from 480M to 5.2B parameters. We show that noisy data indeed induces training loss divergence, and that the probability of divergence depends strongly on the noise type, amount of noise, and model scale. We further find that noise-induced divergences exhibit activation patterns distinct from those caused by high learning rates, and we provide diagnostics that differentiate these two failure modes. Together, these results provide a large-scale, controlled characterization of how noisy data affects loss divergence in LLM pretraining.
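The abstract's injection procedure can be sketched in code. This is a minimal illustration, not the paper's exact method: the function name `inject_uniform_noise`, the token-level replacement strategy, and the fixed seed are all assumptions made here for clarity.

```python
import random


def inject_uniform_noise(tokens, noise_fraction, vocab_size, seed=0):
    """Corrupt a token sequence with uniformly random token ids.

    Illustrative sketch only: a chosen fraction of positions is
    overwritten with ids drawn uniformly from the vocabulary. The
    study's actual corruption granularity (token, span, or document
    level) may differ.
    """
    rng = random.Random(seed)  # fixed seed for reproducible corruption
    noisy = list(tokens)
    n_noise = int(len(noisy) * noise_fraction)
    # Pick distinct positions to corrupt, then replace each with a
    # uniformly random token id.
    for i in rng.sample(range(len(noisy)), n_noise):
        noisy[i] = rng.randrange(vocab_size)
    return noisy


clean = list(range(1000))            # stand-in for a clean token stream
noisy = inject_uniform_noise(clean, noise_fraction=0.1, vocab_size=50_000)
```

Sweeping `noise_fraction` and the model size over such corrupted streams is the kind of controlled setup the study describes for probing when divergence occurs.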