🤖 AI Summary
The Multi30k dataset has long been confined to four Latin-script European languages, severely hindering multilingual multimodal machine translation (MMT) research across diverse writing systems and language families. To address this limitation, we introduce the first 30k-scale multilingual parallel corpus covering Arabic, Spanish, Ukrainian, Simplified Chinese, and Traditional Chinese, thereby extending MMT beyond Latin-script constraints. We employ the NLLB200-3.3B model for high-quality zero-shot translation and jointly evaluate semantic fidelity using cosine similarity and symmetric KL divergence. The resulting resource comprises over 30,000 high-quality parallel sentence pairs. Excluding Traditional Chinese, all language pairs achieve semantic similarity > 0.8 and KL divergence < 0.000251, matching the quality of existing benchmark extensions. This work establishes the first large-scale, cross-script, high-fidelity benchmark for non-Latin MMT research.
📝 Abstract
Multi30k is frequently cited in the multimodal machine translation (MMT) literature, offering parallel text data for training and fine-tuning deep learning models. However, it is limited to four languages: Czech, English, French, and German. This restriction has led many researchers to focus their investigations only on these languages. As a result, MMT research on diverse languages has stalled, because the official Multi30k dataset represents only European languages in Latin scripts. Previous efforts to extend Multi30k exist, but the list of supported languages, represented language families, and scripts remains very short. To address these issues, we propose MultiScript30k, a new Multi30k dataset extension for global languages in various scripts, created by translating the English version of Multi30k (Multi30k-En) using NLLB200-3.3B. The dataset consists of over 30,000 sentences and provides translations of all sentences in Multi30k-En into Ar, Es, Uk, Zh_Hans, and Zh_Hant. Similarity analysis shows that the extension consistently achieves cosine similarity greater than 0.8 and symmetric KL divergence less than 0.000251 for all supported languages except Zh_Hant, which is comparable to the previous Multi30k extensions ArEnMulti30k and Multi30k-Uk. COMETKiwi scores reveal mixed assessments of MultiScript30k as a translation of Multi30k-En in comparison to the related work: ArEnMulti30k scores nearly equal those of MultiScript30k-Ar, but Multi30k-Uk scores 6.4% higher than MultiScript30k-Uk per split.
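The two fidelity metrics used above can be sketched in a few lines. This is an illustrative implementation only, not the paper's exact pipeline: it assumes each sentence pair is first encoded into fixed-size embedding vectors by some multilingual encoder, and that symmetric KL divergence is computed after converting the embeddings to probability distributions via a softmax (the paper's precise normalization convention is not stated here).

```python
import math

def cosine_similarity(u, v):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def softmax(x):
    # Convert an embedding vector into a probability distribution
    # (an assumption made for illustration; max-shifted for stability).
    m = max(x)
    exps = [math.exp(a - m) for a in x]
    total = sum(exps)
    return [e / total for e in exps]

def symmetric_kl(p, q, eps=1e-12):
    # Symmetrized KL: average of KL(p||q) and KL(q||p).
    kl_pq = sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))
    kl_qp = sum(qi * math.log((qi + eps) / (pi + eps)) for pi, qi in zip(p, q))
    return 0.5 * (kl_pq + kl_qp)

# Hypothetical embeddings for an English sentence and its translation.
emb_en = [0.9, 0.1, 0.4]
emb_tr = [0.8, 0.2, 0.5]
sim = cosine_similarity(emb_en, emb_tr)
div = symmetric_kl(softmax(emb_en), softmax(emb_tr))
```

Identical embeddings give a cosine similarity of 1.0 and a symmetric KL divergence of 0, so higher similarity and lower divergence both indicate closer semantic agreement between source and translation.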