🤖 AI Summary
This study examines the tension between decentralized large language model (LLM) development and the increasingly centralized influence of evaluation benchmarks. Method: We construct a networked agent-based model integrating the Stanford Foundation Model Ecosystem and the Evidently AI Benchmark Registry, employing institutional inference, betweenness-centrality path analysis, and Gini-coefficient quantification to characterize the concentration of benchmark influence. Contribution/Results: We find that the top 15% of benchmark nodes account for over 80% of high-betweenness evaluation paths; the U.S., China, and the EU collectively contribute 83% of benchmarks; and global benchmark authority is highly skewed (Gini coefficient = 0.89). While this concentration enhances standardization, comparability, and reproducibility, it introduces trade-offs, including path dependency, selective visibility, and diminished discriminative power. Simulation experiments demonstrate that introducing novel, diverse benchmarks significantly mitigates structural concentration.
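For intuition, the sketch below (our illustration, not the study's code) computes the two concentration statistics named above, the betweenness share of the top 15% of nodes and a Gini coefficient, on a synthetic scale-free graph standing in for the benchmark-author-institution network; the graph generator and its parameters are assumptions.

```python
# Illustrative sketch, not the study's code: computes the two concentration
# statistics named above (top-15% betweenness share, Gini coefficient) on a
# synthetic scale-free graph standing in for the benchmark-author-institution
# network. Graph size and parameters are assumptions.
import networkx as nx
import numpy as np

def gini(values) -> float:
    """Gini coefficient of non-negative values (0 = equal, 1 = maximally skewed)."""
    v = np.sort(np.asarray(values, dtype=float))  # ascending order
    n = v.size
    # Discrete formula: G = 2 * sum(i * v_i) / (n * sum(v)) - (n + 1) / n
    return 2 * np.sum(np.arange(1, n + 1) * v) / (n * v.sum()) - (n + 1) / n

G = nx.barabasi_albert_graph(n=300, m=2, seed=42)  # synthetic stand-in network
bc = np.fromiter(nx.betweenness_centrality(G).values(), dtype=float)

top = np.sort(bc)[::-1][: max(1, int(0.15 * bc.size))]
print(f"top-15% betweenness share: {top.sum() / bc.sum():.2f}")
print(f"Gini of betweenness scores: {gini(bc):.2f}")
```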
📄 Abstract
Large language models are proliferating, and so are the benchmarks that serve as their common yardsticks. We ask how the agglomeration patterns of these two layers compare: do they evolve in tandem or diverge? Drawing on two curated proxies for the ecosystem, the Stanford Foundation-Model Ecosystem Graph and the Evidently AI benchmark registry, we find complementary but contrasting dynamics. Model creation has broadened across countries and organizations and diversified in modality, licensing, and access. Benchmark influence, by contrast, displays centralizing patterns: in the inferred benchmark-author-institution network, the top 15% of nodes account for over 80% of high-betweenness paths, three countries produce 83% of benchmark outputs, and the global Gini coefficient for inferred benchmark authority reaches 0.89. An agent-based simulation highlights three mechanisms: a higher entry rate of new benchmarks reduces concentration; rapid inflows can temporarily complicate coordination in evaluation; and stronger penalties for over-fitting have limited effect. Taken together, these results suggest that concentrated benchmark influence functions as coordination infrastructure that supports standardization, comparability, and reproducibility amid rising heterogeneity in model production, while also introducing trade-offs such as path dependence, selective visibility, and diminishing discriminative power as leaderboards saturate.
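As a rough illustration of the first simulation mechanism (our own toy dynamic, not the paper's model), the sketch below grows benchmark "authority" by preferential attachment and varies the entry rate of new benchmarks; all parameter values are assumptions.

```python
# Rough sketch of the first mechanism, under our own assumptions (not the
# paper's model): benchmark "authority" grows by rich-get-richer adoption,
# and a higher entry rate of new benchmarks lowers the final Gini.
import random

def simulate(entry_rate: float, steps: int = 5000, seed: int = 0) -> list[float]:
    rng = random.Random(seed)
    authority = [1.0]  # one incumbent benchmark at the start
    for _ in range(steps):
        if rng.random() < entry_rate:
            authority.append(1.0)  # a new benchmark enters the ecosystem
        else:
            # an evaluation adopts an existing benchmark, chosen
            # proportionally to current authority (preferential attachment)
            i = rng.choices(range(len(authority)), weights=authority)[0]
            authority[i] += 1.0
    return authority

def gini(xs: list[float]) -> float:
    xs, n = sorted(xs), len(xs)
    return 2 * sum((i + 1) * x for i, x in enumerate(xs)) / (n * sum(xs)) - (n + 1) / n

for rate in (0.01, 0.05, 0.20):
    print(f"entry rate {rate:.2f} -> Gini {gini(simulate(rate)):.2f}")
```

Under this toy dynamic the Gini falls as the entry rate rises, matching the direction (though not, of course, the magnitude) of the reported effect.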