🤖 AI Summary
This study investigates whether code-switching (CS) can serve as a knowledge-activation mechanism to enhance language-specific reasoning in large language models (LLMs) for low-resource languages. Focusing on English–Korean CS, the authors construct EnKoQA, a synthetic question-answering dataset, and decompose knowledge activation into two stages: knowledge identification and knowledge leveraging. Controlled experiments across multiple models show that CS input significantly outperforms monolingual English input, effectively eliciting dormant language-specific knowledge in LLMs. Moreover, the performance gain correlates positively with a model's native Korean proficiency, indicating that the efficacy of CS depends on the quality of the target language's internal representation. The authors present this as the first empirical demonstration that CS functions as a controllable "knowledge switch," offering a new approach to improving low-resource language reasoning in LLMs.
📝 Abstract
Code-switching (CS), a phenomenon in which multilingual speakers alternate between languages within a discourse, can convey subtle cultural and linguistic nuances that would otherwise be lost in translation. Recent state-of-the-art multilingual large language models (LLMs) demonstrate strong multilingual abilities in various aspects, including understanding CS, but the power of CS to elicit language-specific knowledge remains underexplored. We therefore investigate the effectiveness of code-switching across a wide range of multilingual LLMs in terms of knowledge activation, i.e., the act of identifying and leveraging knowledge for reasoning. To facilitate this research, we first present EnKoQA, a synthetic English–Korean CS question-answering dataset. We provide a comprehensive analysis of a variety of multilingual LLMs by subdividing the activation process into knowledge identification and knowledge leveraging. Our experiments demonstrate that, compared to English text, CS can faithfully activate knowledge inside LLMs, especially in language-specific domains. In addition, the performance gap between CS and English is larger in models with stronger monolingual Korean abilities, suggesting a correlation between CS and Korean proficiency.
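The evaluation protocol implied by the abstract, posing the same question in monolingual English and in an English–Korean code-switched form, then comparing answer accuracy, can be sketched as below. The example items, the `run_model` stub, and its canned responses are illustrative placeholders, not the actual EnKoQA data, models, or results.

```python
# Hypothetical sketch of a CS-vs-English QA comparison. The dataset entries
# and the model stub are invented for illustration only.

def accuracy(predictions, answers):
    """Fraction of predictions that exactly match the gold answers."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Toy parallel prompts: each question appears in monolingual English and in
# a code-switched form where the Korean entity keeps its native surface form.
dataset = [
    {"en": "Who wrote the novel Toji?",
     "cs": "Who wrote the novel 토지?",
     "answer": "Park Kyong-ni"},
    {"en": "Which dynasty built Gyeongbokgung Palace?",
     "cs": "Which dynasty built 경복궁?",
     "answer": "Joseon"},
]

def run_model(prompt):
    # Placeholder for an LLM call; this fixed lookup simulates the paper's
    # finding that code-switched prompts can retrieve language-specific
    # knowledge that the English-only prompt misses.
    canned = {
        "Who wrote the novel 토지?": "Park Kyong-ni",
        "Which dynasty built 경복궁?": "Joseon",
        "Who wrote the novel Toji?": "Park Kyong-ni",
        "Which dynasty built Gyeongbokgung Palace?": "Unknown",
    }
    return canned.get(prompt, "Unknown")

gold = [ex["answer"] for ex in dataset]
acc_en = accuracy([run_model(ex["en"]) for ex in dataset], gold)
acc_cs = accuracy([run_model(ex["cs"]) for ex in dataset], gold)
print(f"EN accuracy: {acc_en:.2f}, CS accuracy: {acc_cs:.2f}")
```

Replacing `run_model` with a real LLM call over a parallel EN/CS question set yields the per-condition accuracies whose gap the study analyzes.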