🤖 AI Summary
This study addresses the challenge of modeling large language model (LLM) scaling laws, a process traditionally hindered by the high cost of empirical trial and error. We propose the first inverse-problem framework dedicated to discovering LLM scaling laws: it systematically infers quantitative relationships among model size, computational cost, and downstream task performance by inverting observed performance–resource data from large-scale pretraining. Unlike conventional empirical curve fitting, our approach formulates scaling-law discovery as an interpretable, verifiable mathematical inverse problem, enabling a shift from trial-and-error-driven design to principle-driven, law-guided development. The resulting modeling paradigm substantially reduces empirical design overhead while preserving predictive accuracy and improving cost-performance efficiency, providing both theoretical foundations and practical tools for efficiently building LLMs that meet specific performance objectives.
📝 Abstract
Large Language Models (LLMs) are large-scale pretrained models that have achieved remarkable success across diverse domains. These successes have been driven by unprecedented complexity and scale in both data and computation. However, because training such models is so costly, brute-force trial-and-error approaches to improving LLMs are not feasible. Inspired by the success of inverse problems in uncovering fundamental scientific laws, this position paper advocates that inverse problems can also efficiently uncover scaling laws that guide the building of LLMs, achieving desired performance with significantly better cost-effectiveness.
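As a toy illustration of the inverse-problem view described above, the sketch below recovers the parameters of a simple power-law scaling law from (hypothetical) observed size–loss data, then uses the fitted law to predict the model size needed for a target loss. The data points, the single-variable law L(N) = A · N^(−α), and the log-log least-squares fit are all illustrative assumptions, not the paper's actual method:

```python
import math

# Hypothetical observations: (model size N in parameters, validation loss L),
# assumed to follow a power law L(N) = A * N^(-alpha) plus small deviations.
observations = [
    (1e7, 4.20), (1e8, 3.30), (1e9, 2.62), (1e10, 2.08),
]

# "Inverting" the observed data: recover the law's parameters (A, alpha)
# from outcomes rather than guessing them by trial and error.
# Taking logs linearizes the model: log L = log A - alpha * log N,
# so ordinary least squares on (log N, log L) has a closed-form solution.
xs = [math.log(n) for n, _ in observations]
ys = [math.log(loss) for _, loss in observations]
k = len(xs)
mx, my = sum(xs) / k, sum(ys) / k
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
    (x - mx) ** 2 for x in xs
)
alpha = -slope                      # power-law exponent
A = math.exp(my - slope * mx)       # power-law coefficient

print(f"Recovered law: L(N) ~ {A:.2f} * N^(-{alpha:.3f})")

# The fitted law can then be inverted again to answer a design question:
# how large must the model be to reach a target loss?
target_loss = 1.8
N_needed = (A / target_loss) ** (1 / alpha)
print(f"Predicted size for loss {target_loss}: {N_needed:.2e} parameters")
```

In this simple setting the "inverse problem" reduces to linear regression in log space; the paper's point is that the same parameter-recovery framing scales to richer, multi-variable laws over model size, data, and compute.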