ATLAS: Adaptive Transfer Scaling Laws for Multilingual Pretraining, Finetuning, and Decoding the Curse of Multilinguality

📅 2025-10-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Prior scaling law studies are predominantly English-centric, neglecting multilingual settings. Method: This work systematically investigates scaling laws for multilingual models spanning 10M–8B parameters across 400+ languages, uncovering cross-lingual transfer mechanisms and the “multilinguality curse.” We propose Adaptive Transfer Scaling (ATLAS), which constructs a language-pair transfer matrix and identifies, for the first time, computational inflection points for zero-shot pretraining and fine-tuning—enabling language-agnostic optimal scaling. Contribution/Results: Based on 774 large-scale experiments, combined with regression analysis, cross-lingual performance prediction, and empirical transfer modeling, ATLAS improves R² over existing scaling laws by >0.3. It quantifies reciprocity across 1,444 language pairs and establishes a scalable multilingual training paradigm and resource-allocation principle for non-English-dominant scenarios.

Technology Category

Application Category

📝 Abstract
Scaling laws research has focused overwhelmingly on English -- yet the most prominent AI models explicitly serve billions of international users. In this work, we undertake the largest multilingual scaling laws study to date, totaling 774 multilingual training experiments, spanning 10M-8B model parameters, 400+ training languages and 48 evaluation languages. We introduce the Adaptive Transfer Scaling Law (ATLAS) for both monolingual and multilingual pretraining, which outperforms existing scaling laws' out-of-sample generalization often by more than 0.3 R^2. Our analyses of the experiments shed light on multilingual learning dynamics, transfer properties between languages, and the curse of multilinguality. First, we derive a cross-lingual transfer matrix, empirically measuring mutual benefit scores between 38 x 38=1444 language pairs. Second, we derive a language-agnostic scaling law that reveals how to optimally scale model size and data when adding languages without sacrificing performance. Third, we identify the computational crossover points for when to pretrain from scratch versus finetune from multilingual checkpoints. We hope these findings provide the scientific foundation for democratizing scaling laws across languages, and enable practitioners to efficiently scale models -- beyond English-first AI.
Problem

Research questions and friction points this paper is trying to address.

Developing scaling laws for multilingual pretraining beyond English dominance
Quantifying cross-lingual transfer benefits across 1444 language pairs
Optimizing model scaling strategies to overcome multilinguality performance limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Transfer Scaling Law for multilingual pretraining
Cross-lingual transfer matrix measuring language pair benefits
Language-agnostic scaling law for optimal multilingual performance
🔎 Similar Papers
No similar papers found.