AI Summary
Existing measures of linguistic distance lack a unified, scalable framework. This work proposes the Attention Transport Distance (ATD), which, for the first time, integrates the self-attention mechanism of multilingual Transformers with optimal transport theory by treating attention matrices as probability distributions. The result is a tokenization-agnostic metric for quantifying cross-lingual representational distance. ATD not only recovers established language phylogenies with high accuracy but also reveals the influence of geographic proximity and language contact. Furthermore, when employed as a regularizer, ATD improves machine translation performance in low-resource settings, offering a novel tool for analyzing cross-lingual representations.
Abstract
Understanding the distance between human languages is central to linguistics, anthropology, and the study of human evolutionary history. Yet while linguistics has long provided rich qualitative accounts of cross-linguistic variation, a unified and scalable quantitative approach to measuring language distance remains lacking. In this paper, we introduce a method that leverages pretrained multilingual language models as systematic instruments for linguistic measurement. Specifically, we show that the attention mechanisms that emerge spontaneously in these models provide a robust, tokenization-agnostic measure of cross-linguistic distance, which we term Attention Transport Distance (ATD). By treating attention matrices as probability distributions and measuring their geometric divergence via optimal transport, we quantify the representational distance between languages during translation. Applying ATD to a large and diverse set of languages, we demonstrate that the resulting distances recover established linguistic groupings with high fidelity and reveal patterns aligned with geographic and contact-induced relationships. Furthermore, incorporating ATD as a regularizer improves transfer performance in low-resource machine translation. Our results establish a principled foundation for testing linguistic hypotheses using artificial neural networks. This framework transforms multilingual models into powerful tools for quantitative linguistic discovery, facilitating more equitable multilingual AI.
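The abstract's core idea, treating attention matrices as probability distributions and comparing them via optimal transport, can be sketched in a few lines. The snippet below is an illustrative approximation only, not the paper's actual formulation: the function name, the hypothetical inputs `attn_a` / `attn_b`, the use of normalized token positions as a 1-D ground metric, the per-row comparison, and the assumption of equal row counts (e.g., aligned sentence pairs) are all choices made for this sketch.

```python
# Minimal sketch of the attention-transport idea (assumptions noted above):
# each row of a row-stochastic attention matrix is a probability
# distribution over token positions, and two matrices are compared via
# the average optimal-transport (Wasserstein) distance between rows.
import numpy as np
from scipy.stats import wasserstein_distance


def attention_transport_distance(attn_a: np.ndarray, attn_b: np.ndarray) -> float:
    """Average 1-D Wasserstein distance between corresponding rows of two
    row-stochastic attention matrices. Normalized positions in [0, 1] serve
    as the support, so sequences of different lengths remain comparable.
    Assumes both matrices have the same number of rows."""
    pos_a = np.linspace(0.0, 1.0, attn_a.shape[1])
    pos_b = np.linspace(0.0, 1.0, attn_b.shape[1])
    row_dists = [
        wasserstein_distance(pos_a, pos_b, u_weights=row_a, v_weights=row_b)
        for row_a, row_b in zip(attn_a, attn_b)
    ]
    return float(np.mean(row_dists))


# Toy usage: two random 4x4 matrices, normalized so each row sums to 1.
rng = np.random.default_rng(0)
a = rng.random((4, 4)); a /= a.sum(axis=1, keepdims=True)
b = rng.random((4, 4)); b /= b.sum(axis=1, keepdims=True)
print(attention_transport_distance(a, b))
```

Because the comparison operates on attention distributions rather than on token identities, a measure of this kind does not depend on the two languages sharing a vocabulary or tokenizer, which is what makes the metric tokenization-agnostic.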