🤖 AI Summary
Multilingual synthetic instruction data frequently suffer from machine translation artifacts, factual inaccuracies, and insufficient cultural localization. To address these issues, we propose MIDB, the first end-to-end quality enhancement framework specifically designed for multilingual instruction data, which incorporates the revision knowledge of linguistic experts to jointly improve content accuracy, translation fidelity, and cultural appropriateness. Our method employs supervised fine-tuning on around 36.8k human-annotated multilingual revision examples and integrates three core components: instruction rewriting, cross-lingual consistency verification, and culture-sensitive modeling. Experiments spanning 16 languages demonstrate substantial improvements in synthetic data quality across all dimensions. Multilingual large language models (LLMs) fine-tuned on the boosted data exhibit significant gains in both instruction-following capability and cultural understanding, validating MIDB's effectiveness and cross-lingual generalizability.
📝 Abstract
Despite doubts about data quality, instruction synthesis has been widely applied to instruction tuning (IT) of LLMs as an economical and rapid alternative. Recent endeavors focus on improving the quality of synthesized instruction pairs in English and have facilitated IT of English-centric LLMs. However, data quality issues in multilingual synthesized instruction pairs are even more severe, since the common synthesizing practice is to translate English synthesized data into other languages using machine translation (MT). Besides the known content errors in the English synthesized data, multilingual synthesized instruction data are further exposed to defects introduced by MT and suffer from insufficient localization for the target languages. In this paper, we propose MIDB, a Multilingual Instruction Data Booster, to automatically address the quality issues in multilingual synthesized data. MIDB is trained on around 36.8k revision examples across 16 languages produced by human linguistic experts, and can thereby boost low-quality data by fixing content errors and MT defects and improving localization in the synthesized data. Both automatic and human evaluation indicate that not only did MIDB steadily improve instruction data quality in 16 languages, but the instruction-following and cultural-understanding abilities of multilingual LLMs fine-tuned on MIDB-boosted data were also significantly enhanced.