🤖 AI Summary
This work addresses the challenge of training high-performance large language models for low-resource languages, where severe data scarcity is the central obstacle. The authors propose a systematic approach that synthesizes 4.5 trillion tokens of high-quality, domain-specific, reinforcement-learning-oriented bilingual data, integrates a progressive curriculum strategy spanning 20 trillion tokens, and employs the efficient SnapPO reinforcement learning optimization framework to train a 102-billion-parameter bilingual mixture-of-experts (MoE) language model. The resulting model performs on par with state-of-the-art models despite the low-resource setting, showing strong results across multiple English and Korean benchmarks and advancing the development of AI capabilities for under-resourced languages.
📝 Abstract
We introduce Solar Open, a 102B-parameter bilingual Mixture-of-Experts language model for underserved languages. Solar Open demonstrates a systematic methodology for building competitive LLMs by addressing three interconnected challenges. First, to train effectively despite the data scarcity of underserved languages, we synthesize 4.5 trillion tokens of high-quality, domain-specific, and RL-oriented data. Second, we coordinate this data through a progressive curriculum that jointly optimizes composition, quality thresholds, and domain coverage across 20 trillion tokens. Third, to enable reasoning capabilities through scalable RL, we apply our proposed SnapPO framework for efficient optimization. Across English and Korean benchmarks, Solar Open achieves competitive performance, demonstrating the effectiveness of this methodology for underserved-language AI development.
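To make the progressive-curriculum idea concrete, below is a minimal sketch of how a phase-wise data schedule over a fixed token budget could be expressed. The abstract only states that composition, quality thresholds, and domain coverage are jointly adjusted across 20 trillion tokens; the phase names, token budgets, mixture weights, and quality cutoffs here are illustrative assumptions, not Solar Open's actual configuration.

```python
# Hypothetical sketch of a progressive curriculum schedule.
# All phase names, token budgets, mixture weights, and quality thresholds
# below are assumptions for illustration; they are not taken from the paper.
from dataclasses import dataclass


@dataclass
class CurriculumPhase:
    name: str
    token_budget: float         # tokens allotted to this phase, in trillions
    mixture: dict[str, float]   # sampling weights per data source (sum to 1)
    min_quality: float          # quality-score threshold for admitting documents


PHASES = [
    CurriculumPhase("broad-web",    12.0, {"web": 0.70, "synthetic": 0.15, "korean": 0.15}, 0.30),
    CurriculumPhase("domain-focus",  6.0, {"web": 0.40, "synthetic": 0.35, "korean": 0.25}, 0.55),
    CurriculumPhase("rl-oriented",   2.0, {"web": 0.20, "synthetic": 0.55, "korean": 0.25}, 0.75),
]


def phase_at(tokens_seen_t: float) -> CurriculumPhase:
    """Return the active phase for a cumulative token count (in trillions)."""
    cumulative = 0.0
    for phase in PHASES:
        cumulative += phase.token_budget
        if tokens_seen_t < cumulative:
            return phase
    return PHASES[-1]


if __name__ == "__main__":
    for t in (1.0, 14.0, 19.5):
        p = phase_at(t)
        print(f"{t:5.1f}T tokens -> phase={p.name}, mixture={p.mixture}, min_quality={p.min_quality}")
```

In a setup like this, later phases would raise the quality threshold and shift the mixture toward synthetic, domain-specific, and RL-oriented data, which is one plausible way to realize the joint optimization of composition, quality, and coverage described above.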