🤖 AI Summary
Multilingual large language models (LLMs) exhibit significantly degraded performance on non-Latin-script languages (e.g., Chinese, Arabic, Hindi), primarily due to Latin-script dominance in pretraining data and the neglect of cross-script phonological commonalities. To address this, we propose a phoneme-aware, script-agnostic representation framework: (1) explicitly incorporating phonemic transcriptions (e.g., IPA) into prompting and in-context learning (ICL); (2) a Mixed-ICL retrieval strategy that aggregates examples retrieved under both orthographic and phonemic representations; and (3) integrating multilingual prompt engineering with cross-script representation alignment. Our approach leverages phoneme-level semantic regularities to bridge modeling gaps induced by orthographic divergence. Experiments across diverse benchmarks demonstrate absolute improvements of up to 15.1% on non-Latin-script languages and up to 12.6% on Latin-script languages, substantially narrowing the performance gap between the two script families.
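The first component above, surfacing a phonemic transcription alongside the orthographic input, can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the `TOY_G2P` table, the `phoneme_augmented_prompt` name, and the prompt template are all hypothetical stand-ins.

```python
# Illustrative sketch: augment a prompt with an IPA transcription so the
# model sees a script-agnostic (phonemic) view of the input alongside the
# original orthography. TOY_G2P is a hypothetical stand-in for a real
# grapheme-to-phoneme (G2P) system.
TOY_G2P = {
    "नमस्ते": "nəməste",   # Hindi "hello" (toy IPA)
    "दुनिया": "dʊnɪjaː",   # Hindi "world" (toy IPA)
}

def phoneme_augmented_prompt(text: str) -> str:
    """Pair the orthographic input with its (toy) IPA transcription."""
    ipa = " ".join(TOY_G2P.get(token, token) for token in text.split())
    return f"Text: {text}\nIPA: /{ipa}/\nTranslate to English:"
```

A real pipeline would replace `TOY_G2P` with a G2P tool such as Epitran or eSpeak NG, which map orthographic text to IPA for many languages.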
📝 Abstract
Although multilingual LLMs have achieved remarkable performance across benchmarks, we find that they continue to underperform on non-Latin-script languages across contemporary LLM families. This discrepancy arises because LLMs are pretrained on orthographic text dominated by Latin scripts, which obscures the phonology they share with non-Latin scripts. We propose leveraging phonemic transcriptions as complementary signals to induce script-invariant representations. Our study demonstrates that integrating phonemic signals improves performance on both non-Latin- and Latin-script languages, and is particularly effective at closing the performance gap between the two. Through detailed experiments, we show that phonemic and orthographic scripts retrieve distinct examples for in-context learning (ICL). This motivates our proposed Mixed-ICL retrieval strategy, which aggregates examples retrieved from both; it yields significant performance improvements for both Latin-script languages (up to 12.6%) and non-Latin-script languages (up to 15.1%) compared to random ICL retrieval.
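The Mixed-ICL idea, that the orthographic and phonemic views of a query retrieve different neighbors and aggregating both helps, can be sketched with a toy retriever. Everything below is an illustrative assumption rather than the paper's method: the bigram-Jaccard similarity, the hard-coded IPA strings, and the function names are hypothetical, and a real system would use a G2P tool plus a learned retriever.

```python
# Toy Mixed-ICL retrieval sketch: score candidate in-context examples under
# both the orthographic and the phonemic (IPA) view of a query, take the
# top-k from each view, and aggregate the two candidate pools.

def char_ngrams(s: str, n: int = 2) -> set:
    """Character n-grams as a cheap, script-level similarity signal."""
    s = s.replace(" ", "")
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of character bigrams."""
    ga, gb = char_ngrams(a), char_ngrams(b)
    return len(ga & gb) / len(ga | gb) if ga and gb else 0.0

# Candidate pool: (orthographic form, toy IPA transcription).
EXAMPLES = [
    ("night",  "naɪt"),
    ("knight", "naɪt"),
    ("nigh",   "naɪ"),
    ("kite",   "kaɪt"),
]

def mixed_icl_retrieve(query_orth, query_ipa, examples, k=2):
    """Union of the top-k orthographic and top-k phonemic matches."""
    by_orth = sorted(examples, key=lambda ex: similarity(query_orth, ex[0]),
                     reverse=True)[:k]
    by_ipa = sorted(examples, key=lambda ex: similarity(query_ipa, ex[1]),
                    reverse=True)[:k]
    seen, merged = set(), []
    for orth, _ in by_orth + by_ipa:
        if orth not in seen:  # deduplicate, keep retrieval order
            seen.add(orth)
            merged.append(orth)
    return merged
```

For the misspelled query "nite" (toy IPA /naɪt/), the orthographic view favors visually similar strings such as "kite", while the phonemic view surfaces the homophones "night" and "knight"; the aggregated pool contains both kinds of neighbors, which is the intuition behind mixing the two retrievers.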