Can Code-Switched Texts Activate a Knowledge Switch in LLMs? A Case Study on English-Korean Code-Switching

📅 2024-10-24
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study investigates whether code-switching (CS) can serve as a knowledge-activation mechanism to enhance language-specific reasoning in large language models (LLMs) for low-resource languages. Focusing on English-Korean CS, the authors construct EnKoQA, a synthetic question-answering dataset, and decompose knowledge activation into two stages: knowledge identification and knowledge leveraging. Controlled experiments across multiple models show that CS input significantly outperforms monolingual English input, effectively eliciting dormant language-specific knowledge in LLMs. Moreover, the performance gain correlates positively with a model's native Korean proficiency, indicating that the efficacy of CS depends on the quality of the target language's internal representation. To the authors' knowledge, this is the first empirical demonstration that CS functions as a controllable "knowledge switch," offering a new approach to improving low-resource language reasoning in LLMs.

📝 Abstract
Code-switching (CS), a phenomenon where multilingual speakers alternate between languages in a discourse, can convey subtle cultural and linguistic nuances that would otherwise be lost in translation. Recent state-of-the-art multilingual large language models (LLMs) demonstrate excellent multilingual abilities in various aspects, including understanding CS, but the power of CS in eliciting language-specific knowledge is yet to be discovered. Therefore, we investigate the effectiveness of code-switching on a wide range of multilingual LLMs in terms of knowledge activation, or the act of identifying and leveraging knowledge for reasoning. To facilitate the research, we first present EnKoQA, a synthetic English-Korean CS question-answering dataset. We provide a comprehensive analysis of a variety of multilingual LLMs by subdividing the activation process into knowledge identification and knowledge leveraging. Our experiments demonstrate that compared to English text, CS can faithfully activate knowledge inside LLMs, especially in language-specific domains. In addition, the performance gap between CS and English is larger in models that show excellent monolingual abilities, suggesting that there exists a correlation between CS efficacy and Korean proficiency.
Problem

Research questions and friction points this paper is trying to address.

Investigates whether code-switching activates knowledge in LLMs for low-resource languages
Examines the impact of English-Korean code-switching on LLM reasoning and task performance
Assesses whether CS enhances language-specific knowledge retrieval in multilingual models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using code-switching to activate LLM knowledge
Creating EnKoQA dataset for English-Korean tasks
Analyzing knowledge identification and leveraging processes