🤖 AI Summary
Large language models (LLMs) suffer from hallucination in cross-lingual knowledge transfer, primarily because factual representations are misaligned across languages. To address this, we train small Transformer models from scratch on controlled synthetic multilingual data and systematically characterize how representational alignment evolves during training. We propose two targeted interventions—modulating the data distribution and adjusting the tokenization strategy—to enhance cross-lingual transfer. Furthermore, we design mutual information–based metrics and a visualization toolkit to quantify how easily the language of training data can be extracted and how tightly fact representations are coupled across languages. Experiments demonstrate that explicitly improving cross-lingual representational consistency significantly suppresses hallucination and strengthens transfer robustness. This work provides an interpretable framework and reproducible methodology for understanding and improving the multilingual generalization of large models.
📝 Abstract
Large language models (LLMs) struggle with cross-lingual knowledge transfer: they hallucinate when asked in one language about facts expressed in a different language during training. This work introduces a controlled setting to study the causes and dynamics of this phenomenon by training small Transformer models from scratch on synthetic multilingual datasets. We identify a learning phase in which a model develops either separate or unified representations of the same facts across languages, and show that unification is essential for cross-lingual transfer. We also show that the degree of unification depends on the mutual information between facts and the language of the training data, and on how easily that language can be extracted. Based on these insights, we develop methods to modulate the level of cross-lingual transfer by manipulating the data distribution and tokenization, and we introduce metrics and visualizations to formally characterize their effects on unification. Our work shows how controlled settings can shed light on pre-training dynamics and suggests new directions for improving cross-lingual transfer in LLMs.
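The abstract mentions a mutual information–based way of quantifying the coupling between facts and the language they appear in. As an illustrative sketch only (the paper's actual metric and variable definitions are not given here), one can estimate the mutual information between discrete language labels and, say, cluster assignments of a model's fact representations: high MI means representations are language-specific, near-zero MI means they are unified. The `langs`/`clusters` toy data below are assumptions for demonstration.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X; Y) in bits from paired discrete observations
    via plug-in (empirical) probabilities."""
    n = len(xs)
    px = Counter(xs)          # marginal counts of X
    py = Counter(ys)          # marginal counts of Y
    pxy = Counter(zip(xs, ys))  # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# Hypothetical data: language label of each query vs. the cluster id
# assigned to the model's representation of the underlying fact.
langs = ["en", "en", "en", "fr", "fr", "fr"]

separated = [0, 0, 0, 1, 1, 1]  # clusters split perfectly by language
unified   = [0, 0, 0, 0, 0, 0]  # one shared cluster for both languages

print(mutual_information(langs, separated))  # 1.0 bit: language-specific
print(mutual_information(langs, unified))    # 0.0 bits: fully unified
```

In this framing, driving the MI between language and representation toward zero (while keeping fact identity decodable) corresponds to the "unification" the abstract argues is essential for transfer.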