The Impact of Model Scaling on Seen and Unseen Language Performance

📅 2025-01-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates the scaling behavior of multilingual large language models (LLMs) across 204 languages on text classification and machine translation. It examines how model size, the distribution of pretraining resources, and whether a language was seen or unseen during pretraining affect zero-shot and few-shot cross-lingual transfer performance. Using comparative analysis across parameter-scaled model families, a unified multilingual evaluation framework, and instruction-tuned variants, the work identifies three key findings: (1) increasing model size yields negligible gains in zero-shot classification but substantially improves two-shot classification accuracy; (2) overall pretraining resource levels, not just per-language proportions, are the stronger predictor of multilingual generalization; and (3) only instruction-tuned models exhibit positive scaling behavior in machine translation. These results provide empirically grounded principles for resource allocation and architectural design in multilingual LLM development.
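
The scaling comparison at the heart of these findings amounts to fitting accuracy against log model size separately for the zero-shot and two-shot settings. Below is a minimal sketch of how such a trend comparison could be computed; the model sizes and accuracy values are hypothetical placeholders, not results from the paper.

```python
import numpy as np

# Hypothetical model sizes (parameter counts) and accuracies for a single
# task/language; placeholders only, not numbers from the paper.
sizes = np.array([0.6e9, 1.7e9, 3e9, 7e9, 13e9])
acc_zero_shot = np.array([0.41, 0.42, 0.41, 0.43, 0.42])  # mostly flat
acc_two_shot = np.array([0.45, 0.49, 0.53, 0.58, 0.62])   # grows with scale

log_sizes = np.log10(sizes)

# The slope of a least-squares fit in log-parameter space summarizes scaling:
# a near-zero slope means scale buys little; a positive slope means it helps.
for name, acc in [("zero-shot", acc_zero_shot), ("two-shot", acc_two_shot)]:
    slope, intercept = np.polyfit(log_sizes, acc, deg=1)
    print(f"{name}: accuracy ~ {slope:.3f} * log10(params) + {intercept:.3f}")
```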

📝 Abstract
The rapid advancement of Large Language Models (LLMs), particularly those trained on multilingual corpora, has intensified the need for a deeper understanding of their performance across a diverse range of languages and model sizes. Our research addresses this need by studying the performance and scaling behavior of multilingual LLMs in text classification and machine translation tasks across 204 languages. We systematically examine both seen and unseen languages across three model families of varying sizes in zero-shot and few-shot settings. Our findings show significant differences in scaling behavior between zero-shot and two-shot scenarios, with striking disparities in performance between seen and unseen languages. Model scale has little effect on zero-shot performance, which remains mostly flat. However, in two-shot settings, larger models show clear linear improvements in multilingual text classification. For translation tasks, only the instruction-tuned model shows clear benefits from scaling. Our analysis also suggests that overall resource levels, not just the proportions of pretraining languages, are better predictors of model performance, shedding light on what drives multilingual LLM effectiveness.
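
For readers unfamiliar with the two evaluation settings contrasted in the abstract, the sketch below illustrates how zero-shot and two-shot prompts for multilingual text classification differ. The prompt template, label set, and examples are illustrative assumptions, not the paper's actual setup.

```python
from collections.abc import Sequence

def build_prompt(text: str, demonstrations: Sequence[tuple[str, str]] = ()) -> str:
    """Assemble a classification prompt with optional in-context examples.

    An empty `demonstrations` sequence yields a zero-shot prompt; two
    labeled examples yield the two-shot setting studied in the paper.
    """
    parts = [f"Text: {demo}\nLabel: {label}" for demo, label in demonstrations]
    parts.append(f"Text: {text}\nLabel:")
    return "\n\n".join(parts)

# Zero-shot: the model must classify with no in-context examples.
zero_shot_prompt = build_prompt("Das Essen war ausgezeichnet.")

# Two-shot: two labeled demonstrations precede the query; this is the
# setting in which larger models show clear gains.
two_shot_prompt = build_prompt(
    "Das Essen war ausgezeichnet.",
    demonstrations=[
        ("Ich liebe dieses Produkt.", "positive"),
        ("Der Service war schrecklich.", "negative"),
    ],
)
print(two_shot_prompt)
```
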
Problem

Research questions and friction points this paper is trying to address.

Multilingual Models
Text Classification
Translation Performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual Models
Resource Allocation
Task Performance Enhancement