MultiScript30k: Leveraging Multilingual Embeddings to Extend Cross-Script Parallel Data

📅 2025-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
The Multi30k dataset has long been confined to four Latin-script European languages, severely hindering multilingual multimodal machine translation (MMT) research across diverse writing systems and language families. To address this limitation, we introduce the first 30k-scale multilingual parallel corpus covering Arabic, Spanish, Ukrainian, Simplified Chinese, and Traditional Chinese, extending MMT beyond Latin-script constraints. We employ the NLLB200-3.3B model for high-quality zero-shot translation and jointly evaluate semantic fidelity using cosine similarity and symmetric KL divergence. The resulting resource comprises over 30,000 high-quality parallel sentence pairs. Excluding Traditional Chinese, all language pairs achieve semantic similarity above 0.8 and symmetric KL divergence below 0.000251, matching state-of-the-art benchmark quality. This work establishes the first large-scale, cross-script, high-fidelity benchmark for non-Latin MMT research.

📝 Abstract
Multi30k is frequently cited in the multimodal machine translation (MMT) literature, offering parallel text data for training and fine-tuning deep learning models. However, it is limited to four languages: Czech, English, French, and German. This restriction has led many researchers to focus their investigations only on these languages. As a result, MMT research on diverse languages has stalled because the official Multi30k dataset represents only European languages in Latin scripts. Previous efforts to extend Multi30k exist, but the list of supported languages, represented language families, and scripts remains very short. To address these issues, we propose MultiScript30k, a new Multi30k dataset extension for global languages in various scripts, created by translating the English version of Multi30k (Multi30k-En) using NLLB200-3.3B. The dataset consists of over 30,000 sentences and provides translations of all sentences in Multi30k-En into Ar, Es, Uk, Zh_Hans, and Zh_Hant. Similarity analysis shows that the MultiScript30k extension consistently achieves cosine similarity greater than 0.8 and symmetric KL divergence less than 0.000251 for all supported languages except Zh_Hant, which is comparable to the previous Multi30k extensions ArEnMulti30k and Multi30k-Uk. COMETKiwi scores reveal mixed assessments of MultiScript30k as a translation of Multi30k-En in comparison to the related work: ArEnMulti30k scores nearly equal MultiScript30k-Ar, but Multi30k-Uk scores 6.4% higher than MultiScript30k-Uk per split.
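The abstract's quality checks rest on two metrics computed over multilingual sentence embeddings: cosine similarity and symmetric KL divergence. A minimal sketch of both metrics is below; note the paper does not specify how embeddings are normalized into probability distributions for the KL computation, so the softmax step here is an illustrative assumption, and the toy vectors stand in for real sentence-embedding outputs.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def _softmax(x):
    """Map an embedding to a probability distribution (an assumed normalization)."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def symmetric_kl(u, v):
    """Symmetric KL divergence: KL(P || Q) + KL(Q || P), 0.0 for identical inputs."""
    p, q = _softmax(np.asarray(u, dtype=float)), _softmax(np.asarray(v, dtype=float))
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Toy stand-ins for a source/translation embedding pair.
src = [0.2, 0.5, 0.3, 0.9]
tgt = [0.25, 0.45, 0.35, 0.85]
print(cosine_similarity(src, tgt), symmetric_kl(src, tgt))
```

A faithful translation should score close to 1.0 on cosine similarity and close to 0.0 on symmetric KL, which is the direction of the thresholds (> 0.8 and < 0.000251) reported above.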
Problem

Research questions and friction points this paper is trying to address.

Extends multilingual dataset beyond four European languages
Addresses lack of diverse scripts in multimodal translation research
Provides parallel data for global languages using NLLB translation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends Multi30k dataset using NLLB200-3.3B translation
Covers diverse scripts and languages beyond European ones
Ensures high similarity metrics for most translations
Christopher Driggers-Ellis
Computer and Information Science and Engineering, University of Florida, Gainesville, FL
Detravious Brinkley
Computer and Information Science and Engineering, University of Florida, Gainesville, FL
Ray Chen
Computer and Information Science and Engineering, University of Florida, Gainesville, FL
Aashish Dhawan
Computer and Information Science and Engineering, University of Florida, Gainesville, FL
Daisy Zhe Wang
University of Florida
Databases · In-Database Machine Learning · Probabilistic Database Systems · Probabilistic Knowledge Bases · Probabilistic Logic
Christan Grant
Associate Professor, University of Florida
Interactive Machine Learning · Natural Language Processing · Visualization · Data Mining · Privacy