🤖 AI Summary
The proliferation of heterogeneous architectures, diverse training strategies, and rapidly expanding evaluation benchmarks has rendered conventional universal scaling laws inadequate for accurately characterizing cross-model-family and cross-benchmark performance trends in large language models (LLMs).
Method: This paper proposes a latent-variable-based statistical modeling framework that jointly captures family-level commonalities (via latent variables) and model-specific idiosyncrasies (via observable features), thereby overcoming the limitations of monolithic scaling laws and enabling unified performance prediction across architectures and benchmarks. We develop efficient parameter estimation and numerical solving algorithms to support interpretable analysis and downstream applications.
Results: Evaluated on 12 mainstream benchmarks from the Open LLM Leaderboard, our approach achieves a 32% average reduction in prediction error and significantly improves cross-model comparability, establishing a novel paradigm for modeling LLM scaling behavior.
📝 Abstract
We propose a statistical framework built on latent variable modeling for scaling laws of large language models (LLMs). Our work is motivated by the rapid emergence of numerous new LLM families with distinct architectures and training strategies, evaluated on an increasing number of benchmarks. This heterogeneity makes a single global scaling curve inadequate for capturing how performance varies across families and benchmarks. To address this, we propose a latent variable modeling framework in which each LLM family is associated with a latent variable that captures the common underlying features in that family. An LLM's performance on different benchmarks is then driven by its latent skills, which are jointly determined by the latent variable and the model's own observable features. We develop an estimation procedure for this latent variable model and establish its statistical properties. We also design efficient numerical algorithms that support estimation and various downstream tasks. Empirically, we evaluate the approach on 12 widely used benchmarks from the Open LLM Leaderboard (v1/v2).