AI Summary
This study investigates whether performance gains in large language models stem primarily from increased computational scale or from proprietary technical innovations by developers. Leveraging a dataset of 809 models released between 2022 and 2025, the authors estimate scaling-law regressions that incorporate both release-date and developer fixed effects to quantify the contribution of each factor. The analysis reveals that along the performance frontier, 80–90% of performance variation is explained by training compute alone. In non-frontier regions, however, proprietary techniques substantially improve training efficiency, with compute-efficiency differences exceeding 40-fold even between models from the same firm. These findings provide the first empirical evidence of persistent developer-specific efficiency advantages away from the frontier, challenging the prevailing "compute-centric" paradigm in the field.
Abstract
Do leading LLM developers possess a proprietary "secret sauce", or is LLM performance driven by scaling up compute? Using training and benchmark data for 809 models released between 2022 and 2025, we estimate scaling-law regressions with release-date and developer fixed effects. We find clear evidence of developer-specific efficiency advantages, but their importance depends on where models lie in the performance distribution. At the frontier, 80–90% of performance differences are explained by higher training compute, implying that scale, not proprietary technology, drives frontier advances. Away from the frontier, however, proprietary techniques and shared algorithmic progress substantially reduce the compute required to reach fixed capability thresholds, and some companies systematically produce smaller models more efficiently. Strikingly, we also find substantial variation in efficiency within companies: a single firm can train two models whose compute efficiency differs by more than 40x. We also discuss the implications for AI leadership and capability diffusion.
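To make the estimation strategy concrete, the sketch below shows one way a scaling-law regression with release-date and developer fixed effects could be set up. It is a minimal illustration only: the functional form, variable names, and toy data are assumptions chosen for exposition, not the paper's actual specification or dataset.

```python
# Minimal sketch of a scaling-law regression with a release-date trend and
# developer fixed effects. All column names, the functional form, and the
# toy data are illustrative assumptions, not the paper's specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical model-level dataset: one row per released model.
df = pd.DataFrame({
    "score":      [55, 61, 68, 58, 66, 74, 50, 57, 63],          # benchmark score
    "compute":    [1e23, 5e23, 2e24, 8e22, 6e23, 3e24,
                   1e23, 7e23, 2e24],                             # training FLOP
    "release_yr": [2022.5, 2023.2, 2024.0, 2022.8, 2023.5,
                   2024.5, 2023.0, 2023.8, 2024.6],
    "developer":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
})

# Log-compute scaling term, a linear release-date trend (shared algorithmic
# progress), and developer dummies (firm-specific "secret sauce").
df["log_compute"] = np.log10(df["compute"])
fit = smf.ols("score ~ log_compute + release_yr + C(developer)", data=df).fit()
print(fit.summary())

# A developer fixed effect, divided by the log-compute coefficient, converts
# into an equivalent-compute multiplier: how much less (or more) compute that
# firm needs to match the baseline firm's score.
beta = fit.params["log_compute"]
for name, coef in fit.params.items():
    if name.startswith("C(developer)"):
        print(name, "compute-equivalent multiplier:", 10 ** (coef / beta))
```

Under this framing, the abstract's frontier result would correspond to compute alone explaining most of the variation (for example, comparing the fit of a compute-only regression with the full model on frontier models), while the within-firm 40x figure would show up as large residual efficiency differences between a single developer's models.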