Code-Switching Curriculum Learning for Multilingual Transfer in LLMs

📅 2024-11-04
🏛️ arXiv.org
📈 Citations: 5
Influential: 1
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit significant performance degradation on low-resource languages, primarily due to skewed pretraining data distributions. Method: Inspired by code-switching in human second-language acquisition, we propose Code-Switching Curriculum Learning (CSCL), a staged curriculum framework that integrates token-level and sentence-level code-switching for progressive multilingual pretraining, overcoming limitations of monolingual continued pretraining. Our approach encompasses multi-granularity data construction, dynamic curriculum scheduling, and cross-lingual evaluation. Contribution/Results: We validate CSCL on Qwen 2, Gemma 2, and Phi 3.5. Experiments demonstrate consistent improvements in cross-lingual transfer to Indonesian (low-resource), Korean (medium-resource), and Japanese (high-resource), with the largest gains in low-resource settings. Moreover, CSCL enhances model safety robustness, effectively mitigating spurious correlations between language resource scarcity and safety alignment.

📝 Abstract
Large language models (LLMs) now exhibit near human-level performance on various tasks, but their performance drops drastically outside a handful of high-resource languages due to the imbalance in pre-training data. Inspired by the human process of second-language acquisition, particularly code-switching (the practice of alternating languages within a conversation), we propose code-switching curriculum learning (CSCL) to enhance cross-lingual transfer for LLMs. CSCL mimics the stages of human language learning by progressively training models with a curriculum consisting of 1) token-level code-switching, 2) sentence-level code-switching, and 3) monolingual corpora. Using Qwen 2 as our underlying model, we demonstrate the efficacy of CSCL in improving language transfer to Korean, achieving significant performance gains compared to monolingual continual pre-training methods. Ablation studies reveal that both token- and sentence-level code-switching significantly enhance cross-lingual transfer and that curriculum learning amplifies these effects. We also extend our findings to other languages, including Japanese (high-resource) and Indonesian (low-resource), and to two additional models (Gemma 2 and Phi 3.5). We further show that CSCL mitigates spurious correlations between language resources and safety alignment, presenting a robust, efficient framework for more equitable language transfer in LLMs. We observe that CSCL is especially effective in low-resource settings, where high-quality monolingual corpora for language transfer are scarce.
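The two code-switched stages of the curriculum can be illustrated with a small sketch. This is a hypothetical construction, not the paper's implementation: the toy English-to-Korean lexicon, the substitution `ratio`, and the parallel-corpus interleaving are all illustrative assumptions.

```python
import random

# Toy English->Korean lexicon (hypothetical; a real pipeline would use
# an aligned bilingual dictionary or word-alignment tool).
LEXICON = {"language": "언어", "model": "모델", "data": "데이터"}

def token_level_cs(sentence, lexicon, ratio=0.5, rng=random.Random(0)):
    """Stage 1: replace a fraction of translatable tokens with their
    second-language equivalents, yielding token-level code-switched text."""
    out = []
    for tok in sentence.split():
        if tok.lower() in lexicon and rng.random() < ratio:
            out.append(lexicon[tok.lower()])
        else:
            out.append(tok)
    return " ".join(out)

def sentence_level_cs(l1_sentences, l2_sentences, rng=random.Random(0)):
    """Stage 2: alternate whole sentences drawn from parallel L1/L2
    corpora, yielding sentence-level code-switched text."""
    return [s1 if rng.random() < 0.5 else s2
            for s1, s2 in zip(l1_sentences, l2_sentences)]
```

With `ratio=1.0`, `token_level_cs("the language model uses data", LEXICON, ratio=1.0)` substitutes every lexicon word, producing `"the 언어 모델 uses 데이터"`; stage 3 of the curriculum would then train on unmodified monolingual corpora.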
Problem

Research questions and friction points this paper is trying to address.

Improves multilingual transfer in LLMs for low-resource languages
Addresses the performance drop of LLMs outside dominant high-resource languages
Enhances cross-lingual transfer via code-switching curriculum learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Code-switching curriculum learning for multilingual transfer
Progressive training with token and sentence code-switching
Enhances cross-lingual transfer in low-resource languages
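The progressive three-stage training described above can be sketched as a simple staged data scheduler. This is a minimal illustration under assumed stage lengths; the paper's actual stage boundaries and mixing schedule are not specified here.

```python
def curriculum_stream(token_cs, sentence_cs, monolingual, stage_steps):
    """Yield training examples stage by stage: token-level code-switched
    data first, then sentence-level code-switched data, then monolingual
    corpora (hypothetical fixed-length stages for illustration)."""
    stages = [token_cs, sentence_cs, monolingual]
    for data, steps in zip(stages, stage_steps):
        for step in range(steps):
            # Cycle through the stage's pool if it is smaller than
            # the number of steps allotted to the stage.
            yield data[step % len(data)]

# Example: 2 steps of token-level CS, then 1 each of the later stages.
schedule = list(curriculum_stream(["tok"], ["sent"], ["mono"], [2, 1, 1]))
```

Here `schedule` is `["tok", "tok", "sent", "mono"]`, mirroring the curriculum's gradual shift from heavily mixed input toward pure target-language text.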