🤖 AI Summary
This work identifies a significant “cross-lingual knowledge barrier” in multilingual large language models (LLMs): although they perform well on surface-level cross-lingual tasks (e.g., translation and embedding alignment), they show substantial limitations in deep knowledge transfer across languages, as is evident on benchmarks such as MMLU, a Harry Potter quiz, and TOFU. To overcome this bottleneck, the authors propose a lightweight fine-tuning method based on mixed-language general-domain data (e.g., WikiText), which enables effective cross-lingual conceptual alignment without requiring domain-specific corpora. Experiments demonstrate consistent improvements of 12–28 percentage points across diverse cross-lingual evaluation tasks, substantially narrowing the performance gaps. The study empirically confirms that current multilingual LLMs lack genuine cross-lingual conceptual alignment and offers an efficient, low-resource remedy. All code is publicly released.
📝 Abstract
Large language models (LLMs) are typically multilingual due to pretraining on diverse multilingual corpora. But can these models relate corresponding concepts across languages, i.e., be crosslingual? This study evaluates state-of-the-art LLMs on inherently crosslingual tasks. We observe that while these models show promising surface-level crosslingual abilities on machine translation and embedding space analyses, they struggle with deeper crosslingual knowledge transfer, revealing a crosslingual knowledge barrier in both general (MMLU benchmark) and domain-specific (Harry Potter quiz and TOFU benchmark) contexts. Since simple inference-time mitigation methods offer only limited improvement, we propose fine-tuning LLMs on mixed-language data, which effectively reduces these gaps, even when using out-of-domain datasets like WikiText. Our findings suggest the need for explicit optimization to unlock the full crosslingual potential of LLMs. Our code is publicly available at https://github.com/google-research/crosslingual-knowledge-barriers.
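To make the mixed-language fine-tuning idea concrete, here is a minimal sketch of one way such training data could be constructed: given parallel sentence pairs, each sentence in a passage is drawn at random from one language or the other, producing code-mixed text. This is an illustrative recipe under our own assumptions, not the authors' exact pipeline; the function name and the toy corpus are hypothetical.

```python
import random


def make_mixed_language_example(parallel_pairs, swap_prob=0.5, seed=0):
    """Build one mixed-language training passage from parallel sentences.

    parallel_pairs: list of (lang_a_sentence, lang_b_sentence) tuples,
    aligned sentence by sentence. The original sentence order is kept,
    but each position is filled from language A or B at random, so the
    model must align concepts across languages to predict the text.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    mixed = []
    for sent_a, sent_b in parallel_pairs:
        mixed.append(sent_b if rng.random() < swap_prob else sent_a)
    return " ".join(mixed)


# Toy English/French parallel corpus, purely illustrative.
pairs = [
    ("The cat sat on the mat.", "Le chat s'est assis sur le tapis."),
    ("It was raining outside.", "Il pleuvait dehors."),
    ("Everyone stayed indoors.", "Tout le monde est resté à l'intérieur."),
]

print(make_mixed_language_example(pairs))
```

In practice the mixing could also operate at the document or span level, and the parallel text could come from machine-translating a general-domain corpus such as WikiText rather than from a curated bitext.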