AI Summary
This study investigates the necessity of parallel data in pretraining for constructing cross-lingually aligned representations. By systematically training multilingual models with varying proportions of parallel corpora and conducting controlled experiments, multidimensional evaluations, and neuron-level analyses, the authors demonstrate that cross-lingual alignment can emerge naturally without explicit parallel signals. The findings reveal that parallel data only marginally accelerates representation sharing during early pretraining stages and slightly reduces the number of language-specific neurons, while yielding overall alignment performance comparable to that of pretraining without any parallel data. These results challenge the prevailing assumption that parallel data is essential for effective cross-lingual representation learning in multilingual models.
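The summary does not spell out how language-specific neurons are identified; one common approach in the literature is to flag neurons whose activations concentrate on a single language. The sketch below illustrates that idea over precomputed per-language activation probabilities; the function name, the 0.95 threshold, and the toy data are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def language_specific_neurons(act_probs: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """Flag neurons whose activation mass concentrates on a single language.

    act_probs: (num_languages, num_neurons) array where entry [l, j] is the
    empirical probability that neuron j fires on text in language l.
    A neuron counts as language-specific when one language accounts for at
    least `threshold` of its summed activation probability.
    """
    total = act_probs.sum(axis=0, keepdims=True) + 1e-12  # guard against all-zero neurons
    share = act_probs / total                             # per-language share for each neuron
    return share.max(axis=0) >= threshold                 # boolean mask over neurons

# Toy usage: 3 languages, 5 neurons; neurons 0 and 3 fire almost only for one language.
probs = np.array([
    [0.95, 0.10, 0.33, 0.00, 0.50],
    [0.01, 0.50, 0.33, 0.95, 0.40],
    [0.01, 0.45, 0.34, 0.01, 0.45],
])
print(language_specific_neurons(probs))  # -> [ True False False  True False]
```

Counting how many neurons this mask flags, for models trained with and without parallel data, would give the kind of comparison the summary describes.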
Abstract
Shared multilingual representations are essential for cross-lingual tasks and knowledge transfer across languages. This study examines the impact of parallel data, i.e., translated sentence pairs, as a pretraining signal for triggering representations that are aligned across languages. We train reference models with different proportions of parallel data and show that parallel data seems to have only a minimal effect on cross-lingual alignment. Based on multiple evaluation methods, we find that the effect is limited to potentially accelerating representation sharing in the early phases of pretraining and to decreasing the number of language-specific neurons in the model. Cross-lingual alignment seems to emerge at similar levels even without the explicit signal from parallel data.
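The abstract does not name its evaluation methods; a common proxy for cross-lingual alignment is the mean cosine similarity between representations of translated sentence pairs. The following minimal sketch assumes precomputed sentence embeddings (e.g., mean-pooled hidden states from some intermediate layer); all names and values are illustrative, not the paper's implementation.

```python
import numpy as np

def alignment_score(src_emb: np.ndarray, tgt_emb: np.ndarray) -> float:
    """Mean cosine similarity between embeddings of translated sentence pairs.

    src_emb, tgt_emb: (num_pairs, hidden_dim) arrays where row i of each
    array represents the same sentence in two different languages.
    """
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    return float((src * tgt).sum(axis=1).mean())

# Toy usage: random vectors stand in for model states; the small perturbation
# makes the two "languages" nearly aligned, so the score lands close to 1.0.
rng = np.random.default_rng(0)
src = rng.normal(size=(8, 16))
tgt = src + 0.1 * rng.normal(size=(8, 16))
print(round(alignment_score(src, tgt), 3))
```

Tracking such a score across pretraining checkpoints for models with different parallel-data proportions would surface the early-phase acceleration effect the abstract reports.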