🤖 AI Summary
Large language models (LLMs) trained on static historical corpora rapidly become outdated, necessitating continual learning mechanisms that balance assimilation of new knowledge with retention of prior knowledge. Method: We introduce the first large-scale, temporally contiguous pretraining benchmark spanning 114 Common Crawl snapshots and propose a temporal hierarchical evaluation framework. Our approach features an autoregressive meta-scheduling policy and cross-temporal slice data replay, coupled with a multi-source domain evaluation protocol (Common Crawl, Wikipedia, StackExchange, code documentation). Contribution/Results: We empirically demonstrate that general web data requires fixed-ratio replay to mitigate catastrophic forgetting, whereas domain-specific corpora (e.g., Wikipedia, code) exhibit lower replay dependency. On Common Crawl, our method achieves held-out loss comparable to full retraining at 2.6× computational efficiency, substantially enhancing practicality and scalability of temporally continuous LLM pretraining.
📝 Abstract
Large Language Models (LLMs) trained on historical web data inevitably become outdated. We investigate evaluation strategies and update methods for LLMs as new data becomes available. We introduce a web-scale dataset for time-continual pretraining of LLMs derived from 114 dumps of Common Crawl (CC) - orders of magnitude larger than previous continual language modeling benchmarks. We also design time-stratified evaluations across both general CC data and specific domains (Wikipedia, StackExchange, and code documentation) to assess how well various continual learning methods adapt to new data while retaining past knowledge. Our findings demonstrate that, on general CC data, autoregressive meta-schedules combined with a fixed-ratio replay of older data can achieve comparable held-out loss to re-training from scratch, while requiring significantly less computation (2.6x). However, the optimal balance between incorporating new data and replaying old data differs as replay is crucial to avoid forgetting on generic web data but less so on specific domains.