🤖 AI Summary
This work investigates how random noise in internet-sourced pretraining data, such as decoding errors and corrupted web content, affects large language models. We find that while such noise perturbs next-token prediction (NTP) loss only marginally, it can significantly degrade downstream task performance, revealing a decoupling between NTP loss and downstream robustness. To address this, we propose the first "What-Why-How" framework for systematically characterizing such noise and introduce a parameter-free, plug-and-play local gradient matching loss that improves the noise resilience of downstream task heads by aligning gradients computed on clean and perturbed features, without requiring data cleaning or model retraining. Extensive experiments across eight language and fourteen vision benchmarks demonstrate consistent gains in multilingual and multimodal downstream performance, decoupling downstream robustness from NTP-based optimization alone.
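For intuition about the kind of corruption studied, the minimal sketch below substitutes a small fraction of characters with random printable ASCII to mimic decoding errors or unregulated web content. The function name `inject_random_noise`, the `noise_ratio` parameter, and the uniform substitution model are illustrative assumptions, not the actual error process of web corpora.

```python
import random

def inject_random_noise(text: str, noise_ratio: float = 0.05) -> str:
    # Replace a fraction of characters with random printable ASCII characters,
    # loosely mimicking decoding errors or corrupted web content.
    chars = list(text)
    n_corrupt = int(len(chars) * noise_ratio)
    for idx in random.sample(range(len(chars)), k=n_corrupt):
        chars[idx] = chr(random.randint(0x20, 0x7E))
    return "".join(chars)

# Example: corrupt ~5% of the characters in a sentence.
print(inject_random_noise("Web-scale pre-training datasets are the cornerstone of LLMs."))
```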
📝 Abstract
Web-scale pre-training datasets are the cornerstone of LLMs' success. However, text data curated from the internet inevitably contains random noise caused by decoding errors or unregulated web content. In contrast to previous works that focus on low-quality or synthetic data, our study **provides the first systematic investigation into such random noise through a cohesive "What-Why-How" framework.** Surprisingly, we observe that the resulting increase in next-token prediction (NTP) loss is significantly lower than the proportion of random noise. We provide a theoretical justification for this phenomenon, which also elucidates the success of multilingual models. On the other hand, experiments show that downstream performance does not depend solely on NTP loss, meaning that random noise can still degrade downstream results. To mitigate these adverse effects, we introduce a novel plug-and-play Local Gradient Matching loss, which explicitly enhances the denoising capability of the downstream task head by aligning the gradients of normal and perturbed features, without requiring knowledge of the model's parameters. Additional experiments on 8 language and 14 vision benchmarks further validate its effectiveness.
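To make the gradient-alignment idea concrete, here is a minimal PyTorch-style sketch of a penalty that matches the task-loss gradients taken with respect to clean and perturbed features. The function name, the cross-entropy task loss, and the squared-difference penalty are assumptions for illustration; the paper's exact formulation is not reproduced here. Note that only the downstream head is involved, so the backbone's parameters are never needed.

```python
import torch
import torch.nn.functional as F

def local_gradient_matching_loss(head, feat_clean, feat_noisy, labels):
    # Treat the backbone features as leaf tensors so gradients can be taken
    # with respect to them without touching (or knowing) the backbone weights.
    feat_clean = feat_clean.detach().requires_grad_(True)
    feat_noisy = feat_noisy.detach().requires_grad_(True)

    # Task loss of the downstream head on clean and perturbed features
    # (cross-entropy assumed here for a classification head).
    loss_clean = F.cross_entropy(head(feat_clean), labels)
    loss_noisy = F.cross_entropy(head(feat_noisy), labels)

    # Local gradients of the task loss w.r.t. the features; create_graph=True
    # keeps the matching term differentiable w.r.t. the head's parameters.
    grad_clean = torch.autograd.grad(loss_clean, feat_clean, create_graph=True)[0]
    grad_noisy = torch.autograd.grad(loss_noisy, feat_noisy, create_graph=True)[0]

    # Penalize the mismatch between the two local gradients.
    return (grad_clean - grad_noisy).pow(2).sum(dim=-1).mean()
```

In training, such a penalty would typically be added to the ordinary task loss with a small weight, e.g. `total = loss_clean + lam * lgm_term`, where `lam` is a hypothetical balancing coefficient; this is a sketch under those assumptions rather than the authors' exact objective.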