Calibrating Beyond English: Language Diversity for Better Quantized Multilingual LLM

📅 2026-01-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the performance degradation of multilingual large language models under post-training quantization when they are calibrated exclusively with English data. The authors systematically evaluate eight calibration settings (five single-language and three multilingual mixes) across ten languages using both GPTQ and AWQ quantization, with experiments conducted on Llama3.1-8B and Qwen2.5-7B. Their findings show that incorporating multilingual or non-English calibration data significantly improves post-quantization perplexity over English-only baselines, with multilingual mixes reducing average perplexity by up to 3.52 points. Notably, calibration sets aligned with the evaluation language yield the largest per-language improvements. These results underscore the critical role of linguistic diversity in the calibration process for effective multilingual model quantization.
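As a concrete illustration of what the calibration step involves, the sketch below assembles a mixed-language calibration set and hands it to GPTQ through the Hugging Face transformers/optimum integration. This is a minimal sketch rather than the authors' pipeline: the corpus (Wikipedia dumps), the language mix, and the sample counts are assumptions chosen only to keep the example self-contained.

```python
# Minimal sketch, not the authors' pipeline: assemble a mixed-language
# calibration set and hand it to GPTQ via Hugging Face transformers/optimum.
# The corpus (Wikipedia dumps), language mix, and sample counts are
# assumptions made only to keep the example self-contained.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/Llama-3.1-8B"        # one of the two models studied
languages = ["en", "de", "zh", "ar", "hi"]  # hypothetical multilingual mix
samples_per_lang = 26                       # ~128 calibration texts in total

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Draw a few raw-text samples per language to form the calibration set.
calib_texts = []
for lang in languages:
    ds = load_dataset("wikimedia/wikipedia", f"20231101.{lang}",
                      split="train", streaming=True)
    for i, example in enumerate(ds):
        if i >= samples_per_lang:
            break
        calib_texts.append(example["text"])

# GPTQConfig accepts a list of raw strings as its calibration dataset;
# GPTQ then uses the resulting activations to fit 4-bit weight parameters.
quant_config = GPTQConfig(bits=4, dataset=calib_texts, tokenizer=tokenizer)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_config, device_map="auto"
)
model.save_pretrained("llama3.1-8b-gptq-multilingual-calib")
```

Swapping AWQ in for GPTQ, or restricting the mix to a single language, corresponds to the other calibration settings the paper compares.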

📝 Abstract
Quantization is an effective technique for reducing the storage footprint and computational costs of Large Language Models (LLMs), but it often results in performance degradation. Existing post-training quantization methods typically use small, English-only calibration sets; however, their impact on multilingual models remains underexplored. We systematically evaluate eight calibration settings (five single-language and three multilingual mixes) with two quantizers (GPTQ, AWQ) on data from 10 languages. Our findings reveal a consistent trend: non-English and multilingual calibration sets significantly improve perplexity compared to English-only baselines. Specifically, we observe notable average perplexity gains across both quantizers on Llama3.1 8B and Qwen2.5 7B, with multilingual mixes achieving the largest overall reductions of up to 3.52 points in perplexity. Furthermore, our analysis indicates that tailoring calibration sets to the evaluation language yields the largest improvements for individual languages, underscoring the importance of linguistic alignment. We also identify specific failure cases where certain language-quantizer combinations degrade performance, which we trace to differences in activation range distributions across languages. These results highlight that static, one-size-fits-all calibration is suboptimal and that tailoring calibration data, both in language and diversity, plays a crucial role in robustly quantizing multilingual LLMs.
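The failure-case analysis in the abstract hinges on how activation ranges differ across languages. The sketch below is an assumption about how such a comparison could be probed, not the paper's method: it hooks the linear layers of one of the studied models and records the largest absolute activation produced by short texts in a few languages; the probe sentences and the choice of layers are placeholders.

```python
# Minimal sketch (assumption, not from the paper): compare per-language
# activation ranges, the quantity the abstract points to when explaining
# why some language-quantizer combinations degrade after quantization.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B"  # one of the two models studied
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Hypothetical probe sentences; in practice one would use held-out corpus text.
probes = {
    "en": "The quick brown fox jumps over the lazy dog.",
    "de": "Der schnelle braune Fuchs springt über den faulen Hund.",
    "zh": "敏捷的棕色狐狸跳过了懒狗。",
}

def make_hook(store):
    def hook(module, inputs, output):
        # Track the largest absolute activation seen at this layer's output.
        store.append(output.detach().abs().max().item())
    return hook

ranges = {}
for lang, text in probes.items():
    per_layer = []
    handles = [
        m.register_forward_hook(make_hook(per_layer))
        for m in model.modules()
        if isinstance(m, torch.nn.Linear)
    ]
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        model(**inputs)
    for h in handles:
        h.remove()
    ranges[lang] = max(per_layer)
    print(f"{lang}: max |activation| = {ranges[lang]:.1f}")
```

If some languages drive noticeably wider activations, a calibration set that never sees them may fit quantization parameters poorly for those inputs, which is consistent with the failure cases the paper reports.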
Problem

Research questions and friction points this paper is trying to address.

quantization
multilingual LLMs
calibration
language diversity
perplexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

multilingual calibration
language diversity
post-training quantization
perplexity reduction
activation range distribution