🤖 AI Summary
This study addresses the current lack of systematic measurement of the open-source large language model (LLM) ecosystem, which hinders a clear understanding of global adoption patterns, evolutionary trajectories, and regional competitive dynamics. We present the first comprehensive evaluation framework covering approximately 1,500 prominent open-source LLMs, integrating multi-source data through meta-analysis of performance benchmarks, community activity metrics (Hugging Face download counts and derivative-model counts), and inference market-share tracking. Our analysis reveals that, beginning in summer 2025, China's open-source LLM ecosystem significantly surpassed that of the United States and has continued to widen its lead, offering authoritative, quantitative insights into the global LLM landscape for researchers, industry stakeholders, and policymakers.
📝 Abstract
We present a comprehensive adoption snapshot of the leading open language models and of who is building them, focusing on the ~1.5K mainline open models (such as Alibaba's Qwen, DeepSeek, and Meta's Llama) that form the foundation of an ecosystem crucial to researchers, entrepreneurs, and policy advisors. We document a clear trend: Chinese models overtook their U.S.-built counterparts in the summer of 2025 and have since widened the gap over their Western peers. We combine Hugging Face downloads, derivative-model counts, inference market share, performance metrics, and more to build a comprehensive picture of the ecosystem.