Code-Switching In-Context Learning for Cross-Lingual Transfer of Large Language Models

📅 2025-10-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from a “translation barrier”: their implicit reliance on English as an internal representation leads to substantial performance degradation in non-English reasoning. Existing cross-lingual in-context learning (X-ICL) approaches—typically employing monolingual examples—fail to mitigate, and may even exacerbate, this issue. To address it, we propose Code-Switching In-Context Learning (CSICL), the first prompting method to integrate a progressive code-switching mechanism: instructions and demonstrations are systematically shifted from the target language to English, explicitly guiding the model along its latent English-centric reasoning pathway. CSICL requires no model fine-tuning and is validated across four LLMs, six datasets, and ten languages. It yields average gains of +3.1 percentage points in target languages and +1.9 points in unseen languages; in low-resource settings, the improvements reach +14.7 points (target) and +5.3 points (unseen), significantly enhancing cross-lingual generalization and linguistic inclusivity.

📝 Abstract
While large language models (LLMs) exhibit strong multilingual abilities, their reliance on English as a latent representation creates a translation barrier, where reasoning implicitly depends on internal translation into English. When this process fails, performance in non-English languages deteriorates sharply, limiting the inclusiveness of LLM-based applications. Existing cross-lingual in-context learning (X-ICL) methods primarily leverage monolingual demonstrations, often failing to mitigate this barrier and instead reinforcing it. In this work, we introduce code-switching in-context learning (CSICL), a simple yet effective prompting strategy that progressively transitions from a target language to English within demonstrations and instructions to facilitate latent reasoning in English. By explicitly scaffolding the reasoning process through controlled code-switching, CSICL acts as an implicit linguistic bridge that enhances cross-lingual alignment and reduces the impact of the translation barrier. We conduct extensive experiments across 4 LLMs, 6 datasets, and 10 languages, spanning both knowledge-intensive and reasoning-oriented domains. Our results demonstrate that CSICL consistently outperforms X-ICL baselines, achieving gains of 3.1%p and 1.9%p in target and unseen languages, respectively. The improvement is even more pronounced in low-resource settings, with gains of 14.7%p in target and 5.3%p in unseen languages. These findings establish code-switching as a principled and robust approach for overcoming the translation barrier during inference, moving LLMs toward more equitable and effective multilingual systems.
Problem

Research questions and friction points this paper is trying to address.

Overcoming translation barriers in multilingual large language models
Enhancing cross-lingual transfer through code-switching demonstrations
Improving performance in non-English and low-resource languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Code-switching prompts transition from target language to English
Scaffolds reasoning process to enhance cross-lingual alignment
Mitigates the translation barrier caused by implicit internal translation into English
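The idea above can be sketched as prompt construction. The following is a minimal, hypothetical illustration of assembling a progressive code-switching prompt: demonstrations in the target language come first, English demonstrations follow, and the target-language query closes the prompt. The demonstration texts, the Q/A template, and the switching schedule are assumptions for illustration, not the authors' exact implementation.

```python
# Hypothetical sketch of a CSICL-style prompt builder: demonstrations
# shift from the target language toward English before the final query.

def build_csicl_prompt(demos_target, demos_english, query):
    """Order demonstrations target-language-first, then English, then
    append the target-language query, approximating the progressive
    target-to-English transition described in the paper."""
    ordered = demos_target + demos_english
    blocks = [f"Q: {q}\nA: {a}" for q, a in ordered]
    blocks.append(f"Q: {query}\nA:")  # model completes the final answer
    return "\n\n".join(blocks)

# Illustrative Korean-to-English example (contents are assumptions).
demos_ko = [("대한민국의 수도는 어디인가요?", "서울입니다.")]
demos_en = [("What is the capital of France?", "Paris.")]
prompt = build_csicl_prompt(demos_ko, demos_en, "일본의 수도는 어디인가요?")
```

The resulting string places the target-language demonstration before the English one, so the prompt's language gradually converges on English just before the model is asked to answer.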