Capability Salience Vector: Fine-grained Alignment of Loss and Capabilities for Downstream Task Scaling Law

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing scaling laws rely on validation loss to predict downstream performance, yet a significant disconnect exists between these two metrics. Method: This paper introduces the Capability Salience Vector (CSV), the first learnable, token-level loss-weighting framework that decomposes loss contributions in a fine-grained, capability-aware manner, explicitly mapping token-level loss to model meta-capabilities (e.g., reasoning, memory) and thereby departing from the conventional uniform-loss assumption. CSV integrates gradient sensitivity analysis with task-guided token-importance modeling and is jointly optimized across multi-task benchmarks. Contribution/Results: On major evaluation benchmarks, CSV improves downstream performance prediction accuracy, achieving an R² gain of over 0.32. It enables, for the first time, an interpretable and predictive transition of scaling laws from loss-level to capability-level characterization, establishing a principled foundation for capability-aware model scaling.

📝 Abstract
Scaling laws build the relationship between training computation and validation loss, enabling researchers to effectively predict the loss trend of models across different levels of computation. However, a gap remains between validation loss and a model's downstream capabilities, making it non-trivial to apply scaling laws to direct performance prediction for downstream tasks. The loss typically represents a cumulative penalty over predicted tokens, which are implicitly treated as equally important. Nevertheless, our studies show evidence that, under different training data distributions, the relationship between downstream capability and computation or token loss cannot be modeled directly. To bridge the gap between validation loss and downstream task capabilities, we introduce the Capability Salience Vector, which decomposes the overall loss and assigns different importance weights to tokens to assess a specific meta-capability, aligning validation loss with downstream task performance in terms of the model's capabilities. Experiments on various popular benchmarks demonstrate that the proposed Capability Salience Vector can significantly improve the predictability of language model performance on downstream tasks.
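The core idea of the abstract, replacing the uniform mean over token losses with a learned, capability-specific weighting, can be sketched as follows. This is an illustrative reimplementation under assumed shapes, not the paper's actual code; the function name, normalization choice, and epsilon are hypothetical.

```python
import numpy as np

def capability_weighted_loss(token_losses, salience, eps=1e-8):
    """Aggregate per-token validation losses with a capability salience vector.

    token_losses: (T,) per-token cross-entropy losses on a validation set.
    salience: (T,) learned non-negative importance weights for one
        meta-capability (e.g. reasoning). Shapes and names are assumptions.
    Returns a scalar capability-aligned loss.
    """
    w = np.maximum(salience, 0.0)      # keep weights non-negative
    w = w / (w.sum() + eps)            # normalize weights to sum to ~1
    return float(np.dot(w, token_losses))

# Uniform salience recovers the ordinary mean loss, i.e. the
# conventional equal-importance assumption is a special case.
losses = np.array([2.0, 1.0, 3.0])
uniform_result = capability_weighted_loss(losses, np.ones(3))   # close to 2.0
skewed_result = capability_weighted_loss(losses, np.array([0.0, 0.0, 1.0]))
```

With a skewed salience vector, tokens relevant to the target capability dominate the aggregate, which is what lets the weighted loss track a specific downstream skill rather than average next-token difficulty.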
Problem

Research questions and friction points this paper is trying to address.

Bridges gap between validation loss and downstream task capabilities
Aligns loss with model capabilities for performance prediction
Improves predictability of language model downstream performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Capability Salience Vector for loss alignment
Assigns importance weights to tokens for meta-capabilities
Improves downstream task performance predictability
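The predictability claim above amounts to fitting a regression from the capability-aligned loss to downstream task scores and checking R². A minimal sketch of that evaluation loop, using a plain linear fit and synthetic checkpoint data (the paper's actual fit family and data are not reproduced here):

```python
import numpy as np

def fit_linear(x, y):
    """Least-squares line y ≈ a*x + b; returns (a, b, r2)."""
    a, b = np.polyfit(x, y, 1)
    pred = a * x + b
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Synthetic checkpoints: as the capability-aligned validation loss
# falls, downstream accuracy rises (illustrative numbers only).
cap_loss = np.array([3.2, 2.8, 2.5, 2.1, 1.9])
accuracy = np.array([0.31, 0.38, 0.45, 0.55, 0.60])
a, b, r2 = fit_linear(cap_loss, accuracy)
```

A high R² on held-out checkpoints is what would justify reading the weighted loss as a capability-level scaling-law predictor; with uniform weights the same fit would be expected to be looser, per the paper's reported R² gain.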
👥 Authors
Qiming Ge (Shanghai AI Laboratory; College of Computer Science and Artificial Intelligence, Fudan University)
Shuhao Xing (Shanghai AI Laboratory)
Songyang Gao (Shanghai AI Laboratory)
Yunhua Zhou (Fudan University)
Yicheng Zou (Shanghai AI Laboratory)
Songyang Zhang (Shanghai AI Laboratory)
Zhi Chen (Shanghai AI Laboratory)
Hang Yan (Shanghai AI Laboratory)
Qi Zhang (College of Computer Science and Artificial Intelligence, Fudan University)
Qipeng Guo (Fudan University)
Kai Chen (Shanghai AI Laboratory)