🤖 AI Summary
This work systematically identifies and mitigates language confusion in large language models (LLMs): their failure to reliably generate text in the language the user requests. The authors define and quantify this phenomenon and introduce the Language Confusion Benchmark (LCB), an evaluation suite covering 15 typologically diverse languages. LCB-based evaluation shows that model architecture, pretraining paradigm, and decoding strategy all significantly affect language consistency; prominent models, including Llama Instruct and Mistral, exhibit severe confusion. Few-shot prompting and multilingual supervised fine-tuning partially alleviate the issue, and preference tuning shows further promise. LCB is released open source, providing an efficient, scalable first layer of multilingual evaluation for advancing robustness and reliability in multilingual LLMs.
📝 Abstract
We investigate a surprising limitation of LLMs: their inability to consistently generate text in a user’s desired language. We create the Language Confusion Benchmark (LCB) to evaluate such failures, covering 15 typologically diverse languages with existing and newly created English and multilingual prompts. We evaluate a range of LLMs on monolingual and cross-lingual generation reflecting practical use cases, finding that Llama Instruct and Mistral models exhibit high degrees of language confusion and that even the strongest models fail to consistently respond in the correct language. We observe that base and English-centric instruct models are more prone to language confusion, which is aggravated by complex prompts and high sampling temperatures. We find that language confusion can be partially mitigated via few-shot prompting, multilingual supervised fine-tuning (SFT), and preference tuning. We release our language confusion benchmark, which serves as a first layer of efficient, scalable multilingual evaluation.
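To make the evaluation idea concrete, the sketch below shows one way language confusion could be scored as a line-level pass rate: detect the language of each line of a response and check it against the requested language. This is an illustrative assumption, not the benchmark's actual implementation; in particular, `detect_language` here is a toy heuristic stand-in for a real language identifier (the paper's evaluation would use a proper language-ID model).

```python
# Hedged sketch: scoring language confusion as a line-level pass rate.
# detect_language is a toy heuristic (assumption), not LCB's detector.

def detect_language(line: str) -> str:
    """Toy language ID based on Unicode script ranges (illustrative only)."""
    if any("\u3040" <= ch <= "\u30ff" for ch in line):
        return "ja"  # hiragana/katakana -> Japanese
    if any("\u0400" <= ch <= "\u04ff" for ch in line):
        return "ru"  # Cyrillic -> Russian (crude: ignores other Cyrillic languages)
    return "en"      # fall back to English for Latin script

def line_pass_rate(response: str, expected_lang: str) -> float:
    """Fraction of non-empty response lines detected as the requested language."""
    lines = [l for l in response.splitlines() if l.strip()]
    if not lines:
        return 0.0
    passed = sum(detect_language(l) == expected_lang for l in lines)
    return passed / len(lines)

# A response that switches language mid-answer is penalized:
resp = "Это ответ на русском.\nBut this line slipped into English."
print(line_pass_rate(resp, "ru"))  # one of two lines passes -> 0.5
```

Averaging this pass rate over a prompt set gives a single consistency score per model and language, which is the kind of cheap, scalable signal the abstract describes as a "first layer" of multilingual evaluation.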