A Latent Variable Framework for Scaling Laws in Large Language Models

📅 2025-12-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The proliferation of heterogeneous architectures, diverse training strategies, and rapidly expanding evaluation benchmarks has rendered conventional universal scaling laws inadequate for accurately characterizing cross-model-family and cross-benchmark performance trends in large language models (LLMs). Method: This paper proposes a latent-variable-based statistical modeling framework that jointly captures family-level commonalities (via latent variables) and model-specific idiosyncrasies (via observable features), thereby overcoming the limitations of monolithic scaling laws and enabling unified performance prediction across architectures and benchmarks. We develop efficient parameter estimation and numerical solving algorithms to support interpretable analysis and downstream applications. Results: Evaluated on 12 mainstream benchmarks from the Open LLM Leaderboard, our approach achieves a 32% average reduction in prediction error and significantly improves cross-model comparability, establishing a novel paradigm for modeling LLM scaling behavior.

Technology Category

Application Category

📝 Abstract
We propose a statistical framework built on latent variable modeling for scaling laws of large language models (LLMs). Our work is motivated by the rapid emergence of numerous new LLM families with distinct architectures and training strategies, evaluated on an increasing number of benchmarks. This heterogeneity makes a single global scaling curve inadequate for capturing how performance varies across families and benchmarks. To address this, we propose a latent variable modeling framework in which each LLM family is associated with a latent variable that captures the common underlying features in that family. An LLM's performance on different benchmarks is then driven by its latent skills, which are jointly determined by the latent variable and the model's own observable features. We develop an estimation procedure for this latent variable model and establish its statistical properties. We also design efficient numerical algorithms that support estimation and various downstream tasks. Empirically, we evaluate the approach on 12 widely used benchmarks from the Open LLM Leaderboard (v1/v2).
Problem

Research questions and friction points this paper is trying to address.

Develops a latent variable framework for LLM scaling laws
Addresses heterogeneity in model families and benchmarks
Estimates latent skills from observable features and family variables
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent variable modeling for scaling laws
Captures family-specific features via latent variables
Efficient algorithms for estimation and downstream tasks
🔎 Similar Papers
No similar papers found.