🤖 AI Summary
This study investigates the impact of model complexity control on the generalization and reasoning capabilities of large language models (LLMs). We address the instability of scaling laws arising from the fixed initialization standard deviation used in conventional approaches by proposing a novel complexity-control paradigm: replacing the fixed initialization variance with a constant initialization rate, jointly optimized with the weight decay coefficient. The method is broadly applicable, scalable, and easy to implement. We systematically evaluate it across model sizes up to 2.4B parameters and training data scales up to 1T tokens. Results demonstrate significant improvements in reasoning generalization and accelerated convergence of scaling laws along both the model-size and data-scale dimensions. Crucially, this work is the first to establish the initialization rate, as a core complexity-control variable, as a decisive factor governing LLM scaling behavior.
📝 Abstract
The reasoning ability of large language models (LLMs) has been rapidly advancing in recent years, attracting interest in more fundamental approaches that can reliably enhance their generalizability. This work demonstrates that model complexity control, conveniently implemented by adjusting the initialization rate and weight decay coefficient, improves the scaling law of LLMs consistently across varying model sizes and data sizes. This gain is further illustrated by comparing the benchmark performance of 2.4B models pretrained on 1T tokens with different complexity hyperparameters. Instead of fixing the initialization std, we find that a constant initialization rate (the exponent of the std) enables the scaling law to descend faster in both model size and data size. These results indicate that complexity control is a promising direction for the continual advancement of LLMs.
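The distinction between a fixed std and a fixed rate can be sketched as follows. This is a minimal illustration, not the paper's exact parameterization: the function names, the rate symbol `gamma`, and the choice to tie the exponent to the layer's fan-in are assumptions for the sake of example; the core idea from the abstract is that the std is set as a power of a width-like quantity, so the exponent (the rate) stays constant as the model scales while the std itself shrinks.

```python
import numpy as np

def init_std(fan_in: int, gamma: float) -> float:
    # Fixed initialization *rate*: the std is a power of the layer width,
    # std = fan_in ** (-gamma), so the exponent gamma (not the std) is held
    # constant as the model scales. gamma = 0.5 recovers the familiar
    # 1/sqrt(fan_in) scaling; larger gamma gives smaller initial weights.
    return fan_in ** (-gamma)

def init_weights(fan_in: int, fan_out: int, gamma: float,
                 rng: np.random.Generator) -> np.ndarray:
    # Draw a Gaussian weight matrix with the rate-controlled std.
    return rng.normal(0.0, init_std(fan_in, gamma), size=(fan_in, fan_out))

rng = np.random.default_rng(0)
W_small = init_weights(256, 256, gamma=0.6, rng=rng)
W_large = init_weights(2048, 2048, gamma=0.6, rng=rng)
# Same rate, different widths: the wider layer gets a smaller std.
print(W_small.std(), W_large.std())
```

Under a fixed-std scheme, both layers above would be initialized with the same std regardless of width; under a fixed rate, the std decreases predictably with width, which is the knob (together with weight decay) that the paper tunes to control model complexity.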