AI Summary
This study addresses the challenge that large language models often fail to align their output language with user expectations when users mix languages (e.g., Korean and English), exhibiting a bias toward non-English responses and unstable language switching. The work presents the first systematic evaluation of this phenomenon, introducing OLA, a multilingual alignment benchmark spanning scenarios from intra-sentential code-switching to mismatches between instructions and content. It further proposes a lightweight Code-Switching Aware DPO fine-tuning method that, with only about 1K samples, substantially improves output language alignment accuracy. These findings indicate that the issue stems from insufficient alignment rather than inherent model limitations, offering an effective solution for ensuring language consistency in multilingual interactions.
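To make the fine-tuning recipe concrete, here is a minimal sketch of what Code-Switching Aware DPO could look like using an off-the-shelf DPO implementation (Hugging Face TRL). This is not the paper's released code: the pair construction, field names, base model, and hyperparameters are illustrative assumptions. The idea is that each preference pair contrasts a response in the implicitly expected output language (chosen) with an otherwise plausible response in the wrong language (rejected).

```python
# Hypothetical sketch of Code-Switching Aware DPO with Hugging Face TRL.
# Pair construction, model choice, and hyperparameters are assumptions,
# not the paper's actual training code.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Each pair contrasts a response in the language the user implicitly expects
# (chosen) with a fluent response in the wrong language (rejected).
pairs = [
    {
        "prompt": "이 essay의 main argument를 English로 summarize해줘.",
        "chosen": "The essay argues that ...",      # English, as implicitly requested
        "rejected": "이 에세이의 핵심 주장은 ...",    # Korean: misaligned output language
    },
    # ... roughly 1K such examples in total, per the paper's data budget
]
train_dataset = Dataset.from_list(pairs)

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

args = DPOConfig(output_dir="cs-aware-dpo", beta=0.1, num_train_epochs=1)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # "tokenizer=" in older TRL versions
)
trainer.train()
```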
Abstract
Code-switching, alternating between languages within a conversation, is natural for multilingual users, yet it poses fundamental challenges for large language models (LLMs). When a user code-switches in a prompt, they typically do not specify the expected language of the LLM response, so LLMs must infer the output language from contextual and pragmatic cues. We find that current LLMs systematically fail to meet this expectation, responding in undesired languages even when the cues are clear to humans. We introduce OLA, a benchmark to evaluate LLMs' Output Language Alignment in code-switched interactions. OLA focuses on Korean–English code-switching and spans scenarios from simple intra-sentential mixing to instruction-content mismatches. Even frontier models frequently misinterpret implicit language expectations, exhibiting a bias toward non-English responses. We further show that this bias generalizes beyond Korean to Chinese and Indonesian language pairs. Models also show instability through mid-response switching and language intrusions. Chain-of-Thought prompting fails to resolve these errors, indicating weak pragmatic reasoning about output language. However, Code-Switching Aware DPO with minimal data (about 1K examples) substantially reduces misalignment, suggesting these failures stem from insufficient alignment rather than fundamental limitations. Our results highlight the need to align multilingual LLMs with users' implicit expectations in real-world code-switched interactions.
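The abstract does not detail the evaluation protocol, so as a hedged illustration, a simple proxy metric for output language alignment might run the model on prompts annotated with their expected response language and check the detected language of each response. The choice of the langid detector, the field names, and the generate stub below are all assumptions, not the OLA benchmark's actual harness:

```python
# Minimal sketch of an output-language-alignment metric: generate a response
# for each code-switched prompt and check whether its dominant language matches
# the language the user implicitly expects.
from typing import Callable
import langid

def alignment_accuracy(
    examples: list[dict],
    generate: Callable[[str], str],
) -> float:
    """examples: list of {"prompt": str, "expected_lang": ISO code like "en"/"ko"}."""
    hits = 0
    for ex in examples:
        response = generate(ex["prompt"])
        lang, _score = langid.classify(response)  # dominant language of the response
        hits += lang == ex["expected_lang"]
    return hits / len(examples)

# Toy usage with a stub "model" that always answers in English:
examples = [
    {"prompt": "이 문장을 English로 번역해줘: 안녕하세요", "expected_lang": "en"},
    {"prompt": "Translate this into Korean: good morning", "expected_lang": "ko"},
]
print(alignment_accuracy(examples, generate=lambda p: "Hello, this is English."))
# -> 0.5: the stub matches the first prompt's expectation but not the second.
```

A whole-response language check like this would not catch the mid-response switching and language intrusions the abstract mentions; a finer-grained variant could classify each sentence separately.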