🤖 AI Summary
This work identifies a systematic bias in Chinchilla Approach 2, one of the most widely used methods for fitting neural scaling laws: its parabolic approximation skews compute-optimal allocation estimates, leading to suboptimal allocation of training resources. The authors trace the error to three sources and advocate Chinchilla Approach 3, made practical by exploiting the partially linear structure of the loss function via Variable Projection. This reduces the five-dimensional parameter fit to a two-dimensional optimization, enabling an efficient and stable dense or exhaustive search that yields unbiased parameter estimates. Applied to Llama 3 data, the method quantifies the bias of the original approach at roughly 6.5% of the training compute wasted, equivalent to about $1.4 million, and predicts even larger losses in multimodal settings.
📝 Abstract
Chinchilla Approach 2 is among the most widely used methods for fitting neural scaling laws. Its parabolic approximation introduces systematic biases in compute-optimal allocation estimates, even on noise-free synthetic data. Applied to published Llama 3 IsoFLOP data at open frontier compute scales, these biases imply a parameter underallocation corresponding to 6.5% of the $3.8\times10^{25}$ FLOP training budget and \$1.4M (90% CI: \$412K-\$2.9M) in unnecessary compute at 50% H100 MFU. Simulated multimodal model misallocations show even greater opportunity costs due to higher loss surface asymmetry. Three sources of this error are examined: IsoFLOP sampling grid width (Taylor approximation accuracy), uncentered IsoFLOP sampling, and loss surface asymmetry ($\alpha \neq \beta$). Chinchilla Approach 3 largely eliminates these biases but is often regarded as less data-efficient, numerically unstable, prone to local minima, and harder to implement. Each concern is shown to be unfounded or addressable, especially when the partially linear structure of the objective is exploited via Variable Projection, enabling unbiased inference on all five loss surface parameters through a two-dimensional optimization that is well-conditioned, analytically differentiable, and amenable to dense, or even exhaustive, grid search. The resulting method may serve as a more convenient replacement for Approach 2 or a more scalable alternative for adaptations of Approach 3 to richer scaling law formulations.
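The Variable Projection idea described above can be sketched in a few lines: for a fixed pair of exponents $(\alpha, \beta)$, the Chinchilla loss surface $L(N, D) = E + A N^{-\alpha} + B D^{-\beta}$ is linear in $(E, A, B)$, so those three parameters can be recovered by ordinary least squares while only $(\alpha, \beta)$ are searched over a dense grid. This is a minimal illustration, not the paper's implementation: it uses a plain squared-error objective rather than the Huber-on-log-loss fit typically used in Approach 3, and the function name and grid values are hypothetical.

```python
import numpy as np

def varpro_fit(N, D, loss, alphas, betas):
    """Fit L(N, D) = E + A*N**-alpha + B*D**-beta by Variable Projection:
    grid-search the nonlinear exponents (alpha, beta); for each candidate
    pair, the remaining parameters (E, A, B) enter linearly and are
    recovered by ordinary least squares."""
    best = None
    for a in alphas:
        for b in betas:
            # Design matrix for the linear subproblem at fixed (a, b).
            X = np.column_stack([np.ones_like(N), N ** -a, D ** -b])
            coef, *_ = np.linalg.lstsq(X, loss, rcond=None)
            sse = np.sum((X @ coef - loss) ** 2)
            if best is None or sse < best[0]:
                best = (sse, a, b, coef)
    sse, a, b, (E, A, B) = best
    return dict(E=E, A=A, B=B, alpha=a, beta=b, sse=sse)

# Synthetic, noise-free demonstration with Chinchilla-like parameters
# (values are illustrative, not fitted to any real data).
N_grid = np.array([1e8, 5e8, 1e9, 5e9, 1e10])
D_grid = np.array([1e9, 5e9, 1e10, 5e10, 1e11])
Ns, Ds = (m.ravel() for m in np.meshgrid(N_grid, D_grid))
true_loss = 1.69 + 406.4 * Ns ** -0.34 + 410.7 * Ds ** -0.28
fit = varpro_fit(Ns, Ds, true_loss,
                 alphas=[0.30, 0.34, 0.38], betas=[0.24, 0.28, 0.32])
```

Because the grid contains the true exponents and the data are noise-free, the search recovers them exactly and the linear solve returns the generating $(E, A, B)$, illustrating the abstract's claim that the reduced two-dimensional problem is amenable to exhaustive search.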