Scaling, Simplification, and Adaptation: Lessons from Pretraining on Machine-Translated Text

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the data-scarcity barrier to pretraining in low-resource languages, this work investigates using machine-translated (MT) text as a substitute for native corpora, focusing on three questions: (1) how well MT data scales with model size; (2) whether source-side simplification (e.g., LLM rewriting of the English source) improves generalization to native text; and (3) how well MT-pretrained models adapt to limited native data. The authors pretrain GPT-2 variants (124M–774M parameters) on English→Indonesian and English→Tamil MT data and systematically evaluate syntactic probing accuracy and downstream task performance. Results show that MT data supports effective scaling; that source-side simplification harms generalization to native text; and that fine-tuning MT-pretrained models on only a small amount of native data can surpass models trained exclusively on native data. This is presented as the first systematic study to empirically validate the efficacy and transfer potential of MT data in large-scale multilingual pretraining, offering a pathway to mitigate the "curse of multilinguality."

📝 Abstract
Most languages lack sufficient data for large-scale monolingual pretraining, creating a "data wall." Multilingual pretraining helps but is limited by language imbalance and the "curse of multilinguality." An alternative is to translate high-resource text with machine translation (MT), which raises three questions: (1) How does MT-derived data scale with model capacity? (2) Can source-side transformations (e.g., simplifying English with an LLM) improve generalization to native text? (3) How well do models pretrained on MT-derived data adapt when continually trained on limited native text? We investigate these questions by translating English into Indonesian and Tamil, two typologically distant, lower-resource languages, and pretraining GPT-2 models (124M–774M) on native or MT-derived corpora from raw and LLM-simplified English. We evaluate cross-entropy loss on native text, along with accuracy on syntactic probes and downstream tasks. Our results show that (1) MT-pretrained models benefit from scaling; (2) source-side simplification harms generalization to native text; and (3) adapting MT-pretrained models on native text often yields better performance than native-only models, even with less native data. However, tasks requiring cultural nuance (e.g., toxicity detection) demand more exposure to native data.
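The abstract's primary metric is cross-entropy loss on held-out native text. As a minimal illustration of that metric (the per-token probabilities below are invented, not taken from the paper), cross-entropy is simply the average negative log-likelihood the model assigns to the observed tokens:

```python
import math

def cross_entropy(token_probs):
    """Average negative log-likelihood, in nats per token, that a model
    assigns to the tokens of a held-out native-text evaluation set."""
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

# Hypothetical probabilities a pretrained model assigns to each token
# of a native-language sentence (illustrative values only).
native_probs = [0.25, 0.10, 0.40, 0.05]
print(round(cross_entropy(native_probs), 4))  # lower is better
```

A model pretrained on MT data that transfers well to native text will drive this quantity down toward the loss of a native-pretrained model of the same size.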
Problem

Research questions and friction points this paper is trying to address.

How the benefits of pretraining on machine-translated text scale with model capacity
Whether source-side text simplification improves generalization to native text
How well MT-pretrained models adapt when given only limited native text data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic pretraining of GPT-2 models (124M–774M) on machine-translated text
Evidence that scaling model capacity improves the performance of MT-pretrained models
Showing that adapting MT-pretrained models on small amounts of native text can outperform native-only pretraining
Dan John Velasco
Samsung Research Philippines
Natural Language Processing · Deep Learning
Matthew Theodore Roque
Samsung R&D Institute Philippines