AI Summary
This work addresses the challenge of achieving efficient and linguistically balanced multilingual understanding and generation under a constrained model size. The authors propose a multilingual scaling approach centered on efficiency and language balance, training a 3.35B-parameter model covering 70 languages. Combining large-scale multilingual pretraining, region-aware instruction tuning, and carefully calibrated data-mixing ratios, they release both a globally balanced model and three region-specialized variants. The resulting models achieve state-of-the-art performance across translation quality, multilingual comprehension, and target-language generation tasks, while remaining practical for real-world deployment.
Abstract
Tiny Aya redefines what a small multilingual language model can achieve. Trained on 70 languages and refined through region-aware post-training, it delivers state-of-the-art translation quality, strong multilingual understanding, and high-quality target-language generation, all with just 3.35B parameters. The release includes a pretrained foundation model, a globally balanced instruction-tuned variant, and three region-specialized models targeting languages from Africa, South Asia, Europe, Asia-Pacific, and West Asia. This report details the training strategy, data composition, and comprehensive evaluation framework behind Tiny Aya, and presents an alternative scaling path for multilingual AI: one centered on efficiency, balanced performance across languages, and practical deployment.
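To illustrate the deployment claim, here is a minimal usage sketch for loading an instruction-tuned Tiny Aya variant with the Hugging Face transformers library. The repository id, the availability of a chat template, and the example prompt are assumptions for illustration only, not details taken from the report.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id; the actual release name may differ.
MODEL_ID = "org/tiny-aya-global-instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; places the 3.35B model on available devices
)

# Chat-style prompt; the chat template (if any) ships with the tokenizer.
messages = [{"role": "user", "content": "Translate to Swahili: Good morning, friends."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```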