Modeling Romanized Hindi and Bengali: Dataset Creation and Multilingual LLM Integration

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multilingual models exhibit limited robustness for non-standard Romanized Hindi and Bengali text prevalent in South Asian social media, particularly with respect to phonetic variation, orthographic diversity, code-mixing, and low-resource adaptation. To address this, we introduce the first large-scale, high-diversity parallel dataset for Romanized-to-native script transliteration—comprising 1.8 million Hindi and 1.0 million Bengali sentence pairs—carefully curated to cover extensive phonological and orthographic variation. Leveraging the MarianMT framework, we train a multilingual sequence-to-sequence model specifically optimized for this task. Our approach significantly improves transliteration robustness in low-resource and code-mixed settings, outperforming state-of-the-art multilingual models across both BLEU and Character Error Rate (CER) metrics. This work bridges a critical gap in both high-quality transliteration resources and modeling capabilities for Romanized South Asian languages.

📝 Abstract
The development of robust transliteration techniques to transform Romanized scripts into native scripts is crucial for Natural Language Processing tasks, including sentiment analysis, speech recognition, information retrieval, and intelligent personal assistants. Despite significant advancements, state-of-the-art multilingual models still struggle with Romanized script, in which the Roman alphabet is adopted to represent the phonetic structure of diverse languages. Within the South Asian context, where Romanized script for Indo-Aryan languages is widespread across social media and digital communication platforms, such usage continues to pose significant challenges for cutting-edge multilingual models. While a limited number of transliteration datasets and models are available for Indo-Aryan languages, they generally lack sufficient diversity in pronunciation and spelling variations, adequate code-mixed data for large language model (LLM) training, and low-resource adaptation. To address this research gap, we introduce a novel transliteration dataset for two popular Indo-Aryan languages, Hindi and Bengali, ranked as the 3rd and 7th most spoken languages worldwide. Our dataset comprises nearly 1.8 million Hindi and 1 million Bengali transliteration pairs. In addition, we pre-train a custom multilingual seq2seq LLM based on the Marian architecture using the developed dataset. Experimental results demonstrate significant improvements over existing relevant models in terms of BLEU and CER metrics.
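The abstract evaluates transliteration quality with Character Error Rate (CER). As a minimal sketch (not the paper's evaluation code), CER is the Levenshtein edit distance between the model's output and the reference, normalized by the reference length:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]

def cer(hypothesis: str, reference: str) -> float:
    """Character Error Rate: edit distance / reference length (lower is better)."""
    if not reference:
        return 0.0 if not hypothesis else 1.0
    return edit_distance(hypothesis, reference) / len(reference)

# Illustrative strings (not from the paper's dataset): a transliteration
# that drops one Devanagari character out of six gives CER = 1/6.
print(cer("नमसते", "नमस्ते"))
```

Because CER operates on characters rather than words, it is well suited to transliteration, where errors are typically single-character substitutions or omissions in the native script.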
Problem

Research questions and friction points this paper is trying to address.

Develops transliteration techniques for Romanized Hindi and Bengali scripts
Addresses lack of diverse pronunciation and spelling variation datasets
Enhances multilingual LLM performance for low-resource Indo-Aryan languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Created large transliteration dataset for Hindi and Bengali
Pre-trained custom multilingual seq2seq LLM using Marian architecture
Achieved improved performance in BLEU and CER metrics