Compute Optimal Scaling of Skills: Knowledge vs Reasoning

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether compute-optimal scaling laws for distinct capabilities—knowledge-based question answering versus code generation—exhibit capability dependence, and whether such dependence stems from biases in pretraining data mixture. Method: We construct a multi-capability evaluation benchmark, conduct controlled ablation experiments varying data mixture distributions, and perform compute-optimal training analysis under rigorously constrained data composition. Results: We provide the first empirical evidence that these two capabilities follow significantly divergent scaling laws—even when data composition is strictly controlled—demonstrating intrinsic capability dependence in scaling behavior. Furthermore, we show that validation-set capability distribution bias can lead to up to 50% error in estimating the optimal model size. These findings establish capability dependence as a fundamental property of neural scaling laws, offering both theoretical grounding and practical guidance for capability-aware model design and resource allocation.

📝 Abstract
Scaling laws are a critical component of the LLM development pipeline, most famously as a way to forecast training decisions such as 'compute-optimally' trading off parameter count and dataset size, alongside a growing list of other crucial decisions. In this work, we ask whether compute-optimal scaling behaviour can be skill-dependent. In particular, we examine knowledge- and reasoning-based skills such as knowledge-based QA and code generation, and we answer this question in the affirmative: $\textbf{scaling laws are skill-dependent}$. Next, to understand whether skill-dependent scaling is an artefact of the pretraining datamix, we conduct an extensive ablation of different datamixes and find that, even when correcting for datamix differences, $\textbf{knowledge and code exhibit fundamental differences in scaling behaviour}$. We conclude with an analysis of how our findings relate to standard compute-optimal scaling using a validation set, and find that $\textbf{a misspecified validation set can impact compute-optimal parameter count by nearly 50%}$, depending on its skill composition.
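The compute-optimal trade-off the abstract refers to can be sketched with a Chinchilla-style parametric loss. The sketch below is illustrative only: the coefficients, exponents, and the `C ≈ 6ND` compute approximation are assumptions, not the paper's fitted values, and the skill names are hypothetical labels.

```python
import numpy as np

# Chinchilla-style parametric loss and a closed-form compute-optimal
# parameter count. All coefficients below are invented for illustration;
# the paper fits skill-specific curves but these are not its numbers.

def loss(N, D, E, A, B, alpha, beta):
    """Parametric loss L(N, D) = E + A / N^alpha + B / D^beta."""
    return E + A / N**alpha + B / D**beta

def optimal_N(C, A, B, alpha, beta):
    """Compute-optimal N under the budget constraint C ~= 6 * N * D.

    Substituting D = C / (6 N) into L and setting dL/dN = 0 gives
    N* = (alpha*A / (beta*B*6^beta))^(1/(alpha+beta)) * C^(beta/(alpha+beta)).
    """
    G = (alpha * A / (beta * B * 6.0**beta)) ** (1.0 / (alpha + beta))
    return G * C ** (beta / (alpha + beta))

# Two hypothetical skills whose fitted exponents differ slightly:
knowledge = dict(A=400.0, B=1500.0, alpha=0.34, beta=0.30)
code      = dict(A=400.0, B=1500.0, alpha=0.30, beta=0.34)

C = 1e21  # training-compute budget in FLOPs
print(f"knowledge-optimal N: {optimal_N(C, **knowledge):.2e}")
print(f"code-optimal N:      {optimal_N(C, **code):.2e}")
```

Even a small difference in the fitted exponents moves the compute-optimal parameter count substantially, which is the mechanism behind skill-dependent scaling.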
Problem

Research questions and friction points this paper is trying to address.

Examine whether compute-optimal scaling varies by skill type.
Investigate scaling differences between knowledge and reasoning skills.
Analyze impact of validation set skill composition on scaling.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Skill-dependent scaling laws for LLMs
Datamix ablation reveals scaling differences
Validation set impacts compute-optimal parameters
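The validation-set finding above can be sketched numerically: if the validation loss is a weighted blend of two skills, the minimizing parameter count moves with the blend weight. Everything here is a made-up illustration of the mechanism (coefficients, exponents, and the `C ≈ 6ND` approximation are assumptions); the magnitude of the shift depends entirely on these invented numbers.

```python
import numpy as np

# Sketch of the validation-set effect: a validation loss that blends two
# skills with weight w yields a w-dependent compute-optimal N.

def skill_loss(N, C, A, B, alpha, beta):
    """Per-skill loss along the compute frontier C ~= 6 * N * D."""
    D = C / (6.0 * N)
    return A / N**alpha + B / D**beta

def mixed_optimal_N(w, C=1e21):
    """N minimizing w * L_knowledge + (1 - w) * L_code, via grid search."""
    Ns = np.logspace(6, 12, 20000)
    mix = (w * skill_loss(Ns, C, 400.0, 1500.0, 0.34, 0.30)            # "knowledge"
           + (1.0 - w) * skill_loss(Ns, C, 400.0, 1500.0, 0.30, 0.34))  # "code"
    return Ns[np.argmin(mix)]

for w in (0.0, 0.5, 1.0):
    print(f"weight on knowledge w={w:.1f} -> optimal N {mixed_optimal_N(w):.2e}")
```

Because each per-skill loss is convex in N, the blended optimum always lies between the two single-skill optima, so the validation set's skill composition directly steers the estimated model size.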