Perplexity-Aware Data Scaling Law: Perplexity Landscapes Predict Performance for Continual Pre-training

πŸ“… 2025-12-25
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In continual pre-training (CPT), naively scaling up data volume yields diminishing returns and poor data-utilization efficiency. To address this, we propose a perplexity-aware data scaling law: it quantifies the knowledge gap between the model and the target domain via perplexity on domain-specific data, and establishes a power-law relationship between perplexity landscapes and downstream test loss. Crucially, we are the first to incorporate perplexity as a principled measure of data utility into scaling laws, enabling adaptive selection of high-value subsets across diverse perplexity regimes. Our approach moves beyond the data-volume-only paradigm and consistently identifies near-optimal training subsets on medical and general-domain benchmarks, significantly improving model performance, generalization, and training efficiency while suppressing redundancy and noise.
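The knowledge-gap proxy above is just the pre-trained model's perplexity on candidate domain text. A minimal sketch of the computation, using a toy uniform language model in place of a real pre-trained LM (the function name and the 100-token vocabulary are illustrative assumptions, not the paper's implementation):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the mean negative log-likelihood per token.
    Higher perplexity on domain text suggests a larger knowledge gap."""
    if not token_logprobs:
        raise ValueError("empty sequence")
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Toy stand-in for a pre-trained LM: uniform over a 100-token vocabulary,
# so every token has log-prob log(1/100) and the perplexity is exactly 100.
uniform_logprobs = [math.log(1 / 100)] * 20
print(round(perplexity(uniform_logprobs), 6))  # 100.0
```

With a real model, the per-token log-probabilities would come from scoring each domain sample with the frozen pre-trained checkpoint before CPT begins.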

πŸ“ Abstract
Continual Pre-training (CPT) serves as a fundamental approach for adapting foundation models to domain-specific applications. Scaling laws for pre-training define a power-law relationship between dataset size and the test loss of an LLM. However, the marginal gains from simply increasing data for CPT diminish rapidly, yielding suboptimal data utilization and inefficient training. To address this challenge, we propose a novel perplexity-aware data scaling law to establish a predictive relationship between the perplexity landscape of domain-specific data and the test loss. Our approach leverages the perplexity derived from the pre-trained model on domain data as a proxy for estimating the knowledge gap, effectively quantifying the informational perplexity landscape of candidate training samples. By fitting this scaling law across diverse perplexity regimes, we enable adaptive selection of high-utility data subsets, prioritizing content that maximizes knowledge absorption while minimizing redundancy and noise. Extensive experiments demonstrate that our method consistently identifies near-optimal training subsets and achieves superior performance on both medical and general-domain benchmarks.
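The abstract's power-law relationship between dataset size and test loss can be fitted per perplexity regime. A minimal sketch, assuming the simple form L(D) = A · D^(−α) and fitting it by log-log linear regression on synthetic points (the paper's exact functional form and fitting procedure may differ):

```python
import math

def fit_power_law(sizes, losses):
    """Least-squares fit of L(D) = A * D**(-alpha) in log-log space:
    log L = log A - alpha * log D is linear, so ordinary regression applies."""
    xs = [math.log(d) for d in sizes]
    ys = [math.log(l) for l in losses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    alpha = -slope
    A = math.exp(my - slope * mx)
    return A, alpha

# Synthetic losses generated from A=3.0, alpha=0.05; the fit recovers them.
sizes = [1e6, 1e7, 1e8, 1e9]
losses = [3.0 * d ** -0.05 for d in sizes]
A, alpha = fit_power_law(sizes, losses)
print(round(A, 3), round(alpha, 3))  # 3.0 0.05
```

Fitting one such curve per perplexity band is what lets the method predict, before training, which band of candidate data will reduce downstream test loss fastest.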
Problem

Research questions and friction points this paper is trying to address.

Optimizing data selection for continual pre-training of foundation models
Predicting model performance using perplexity landscapes of domain data
Maximizing knowledge absorption while minimizing redundant training samples
Innovation

Methods, ideas, or system contributions that make the work stand out.

Perplexity-aware scaling law predicts test loss
Uses perplexity landscapes to quantify knowledge gaps
Adaptively selects high-utility data subsets for training
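The selection idea in the bullets above can be sketched as a perplexity band-pass filter: very low perplexity signals redundancy (the model already knows the content), very high perplexity signals noise. The thresholds and helper name below are illustrative assumptions; the paper derives its selection adaptively from the fitted scaling law rather than from fixed cutoffs:

```python
def select_by_perplexity(samples, low=2.0, high=50.0):
    """Keep samples whose perplexity under the pre-trained model falls in a
    mid band: below `low` is likely redundant, above `high` is likely noise."""
    return [text for text, ppl in samples if low <= ppl <= high]

candidates = [
    ("well-known fact", 1.2),        # redundant: model already fits it
    ("useful domain text", 12.5),    # high-utility: real knowledge gap
    ("useful rare text", 40.0),      # high-utility: still learnable
    ("garbled noise", 900.0),        # noise: unlearnable outlier
]
print(select_by_perplexity(candidates))
# ['useful domain text', 'useful rare text']
```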
πŸ”Ž Similar Papers
No similar papers found.