🤖 AI Summary
Existing scaling laws rely on validation loss to predict downstream performance, yet a significant disconnect exists between these two metrics. Method: This paper introduces the Capability Salience Vector (CSV), the first learnable, token-level loss weighting framework that decomposes loss contributions in a fine-grained, capability-aware manner—explicitly mapping token-level loss to model meta-capabilities (e.g., reasoning, memory) and thereby departing from the conventional uniform-loss assumption. CSV integrates gradient sensitivity analysis with task-guided token importance modeling and is jointly optimized across multi-task benchmarks. Contribution/Results: On major evaluation benchmarks, CSV improves downstream performance prediction accuracy, achieving an R² gain of over 0.32. It enables, for the first time, an interpretable and predictive transition of scaling laws—from loss-level to capability-level characterization—establishing a principled foundation for capability-aware model scaling.
📝 Abstract
Scaling laws build a relationship between training computation and validation loss, enabling researchers to predict the loss trajectory of models across different levels of computation. However, a gap remains between validation loss and a model's downstream capabilities, making it non-trivial to apply scaling laws directly to performance prediction on downstream tasks. The loss typically represents a cumulative penalty over predicted tokens, which are implicitly treated as equally important. Nevertheless, our studies provide evidence that, across different training data distributions, the relationship between downstream capability and computation or token loss cannot be modeled directly. To bridge the gap between validation loss and downstream task capabilities, in this work we introduce the Capability Salience Vector, which decomposes the overall loss and assigns different importance weights to tokens to assess a specific meta-capability, aligning the validation loss with downstream task performance in terms of the model's capabilities. Experiments on various popular benchmarks demonstrate that our proposed Capability Salience Vector significantly improves the predictability of language model performance on downstream tasks.
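The core aggregation step described above—replacing the uniform average over per-token losses with a learned importance weighting—can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the example salience values are hypothetical, and the actual salience vector would be learned jointly against downstream benchmarks rather than hand-set.

```python
import numpy as np

def capability_weighted_loss(token_losses, salience):
    """Aggregate per-token losses with a capability salience vector.

    Uniform salience recovers the standard mean validation loss;
    non-uniform salience emphasizes tokens tied to a given
    meta-capability (e.g., reasoning). Illustrative sketch only.
    """
    token_losses = np.asarray(token_losses, dtype=float)
    salience = np.asarray(salience, dtype=float)
    weights = salience / salience.sum()  # normalize to a distribution
    return float(np.dot(weights, token_losses))

# Per-token losses from a validation batch (toy values).
losses = [2.0, 0.5, 1.5]

# Uniform weights reduce to the ordinary mean loss.
uniform = capability_weighted_loss(losses, [1.0, 1.0, 1.0])

# Upweighting a capability-relevant token shifts the aggregate,
# yielding a capability-aligned loss signal.
weighted = capability_weighted_loss(losses, [0.2, 0.2, 0.6])
```

Under this view, fitting a scaling law against `weighted` rather than `uniform` is what lets the loss-level prediction track a specific downstream capability.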