🤖 AI Summary
Large language models (LLMs) suffer from constrained Chinese capability gains during continual pretraining (CPT) due to misaligned coupling between the additional language mixture ratio (ALMR) and learning rate (LR).
Method: Building upon Llama-3 8B and 70B base models, we systematically investigate the ALMR–LR coupling mechanism. We propose and empirically validate a novel ALMR–LR coupling scaling law, enabling principled hyperparameter transfer across model scales (8B → 70B). Our approach integrates CPT, coordinated hyperparameter search, and domain-adaptive fine-tuning.
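The coordinated search described above can be sketched as a joint sweep over (ALMR, LR) pairs at the smaller scale, with the best pair then transferred to the larger model. This is a minimal illustrative sketch only; the grid values and validation losses below are invented for demonstration and do not come from the paper.

```python
# Hypothetical illustration of joint (ALMR, LR) selection at small scale
# before transferring the chosen pair to the 70B model. All numbers are
# made up; the paper's actual search grid and losses are not given here.

def best_almr_lr(results):
    """Return the (ALMR, LR) pair with the lowest validation loss."""
    return min(results, key=results.get)

# results maps (additional-language mixture ratio, learning rate) -> val loss
results = {
    (0.1, 1e-5): 2.31,
    (0.3, 1e-5): 2.24,
    (0.3, 3e-5): 2.18,
    (0.5, 3e-5): 2.22,
}

print(best_almr_lr(results))  # -> (0.3, 3e-05)
```

The point of the joint search is that ALMR and LR are coupled: the best mixture ratio depends on the learning rate, so sweeping them independently can miss the optimum.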
Results: The method significantly enhances Chinese understanding and generation, while concurrently improving performance on downstream subtasks—including mathematical reasoning, code generation, and emotional intelligence. The optimized 70B model has been successfully deployed in a production dialogue system, demonstrating industrial-grade robustness and efficacy. Crucially, the scaling law bridges the performance gap between small-scale experimentation and full-scale deployment, offering generalizable guidance for multilingual LLM adaptation.
📝 Abstract
Large Language Models (LLMs) often need Continual Pre-Training (CPT) to acquire unfamiliar language skills or to adapt to new domains. The high training cost of CPT calls for careful selection of key hyper-parameters, such as the mixture ratio of the additional language or domain corpus. However, no systematic study has bridged the gap between the optimal mixture ratio and actual model performance, or between experimental scaling laws and actual deployment at full model size. In this paper, we perform CPT on Llama-3 8B and 70B to enhance their Chinese ability. We study the optimal correlation between the Additional Language Mixture Ratio (ALMR) and the Learning Rate (LR) at the 8B scale, which directly indicates the optimal experimental setup. Through careful hyper-parameter choice and subsequent fine-tuning, the model's capability improves not only on Chinese-related benchmarks but also in specific domains, including math, coding, and emotional intelligence. We deploy the final 70B version of the LLM in a real-life chat system, where it achieves satisfying performance.