Exploring Cross-Lingual Knowledge Transfer via Transliteration-Based MLM Fine-Tuning for Critically Low-resource Chakma Language

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Chakma, an extremely low-resource Indo-Aryan language, suffers from severe underrepresentation in pretrained language models. Method: We propose a cross-lingual transfer approach grounded in Bengali-script transliteration: (1) constructing the first contextually coherent, human-verified Chakma transliteration corpus; and (2) fine-tuning six multilingual encoders—including mBERT, XLM-RoBERTa, and BanglaBERT—via masked language modeling (MLM). Contribution/Results: Our core innovation lies in leveraging transliteration as a representational bridge to mitigate the semantic and distributional gap between low- and high-resource languages. Experiments show that fine-tuned models achieve up to 73.54% token accuracy and a perplexity of 2.90 on Chakma—substantially outperforming baselines. The publicly released dataset establishes critical infrastructure for future low-resource NLP research on Chakma and related languages.

📝 Abstract
As an Indo-Aryan language with limited available data, Chakma remains largely underrepresented in language models. In this work, we introduce a novel corpus of contextually coherent Bangla-transliterated Chakma, curated from Chakma literature and validated by native speakers. Using this dataset, we fine-tune six encoder-based multilingual and regional transformer models (mBERT, XLM-RoBERTa, DistilBERT, DeBERTaV3, BanglaBERT, and IndicBERT) on the masked language modeling (MLM) task. Our experiments show that fine-tuned multilingual models outperform their pre-trained counterparts when adapted to Bangla-transliterated Chakma, achieving up to 73.54% token accuracy and perplexity as low as 2.90. Our analysis further highlights the impact of data quality on model performance and exposes the limitations of OCR pipelines for morphologically rich Indic scripts. Our research demonstrates that Bangla-transliterated Chakma can serve as an effective bridge for transfer learning to the Chakma language, and we release our manually validated monolingual dataset to encourage further research on multilingual language modeling for low-resource languages.
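The MLM objective used for fine-tuning works by hiding a fraction of input tokens and training the encoder to recover them. The paper does not publish its masking code, so the sketch below is an illustrative reconstruction of the standard BERT-style dynamic masking recipe (15% of tokens selected; of those, 80% replaced with `[MASK]`, 10% with a random token, 10% left unchanged). All token IDs and the vocabulary size here are hypothetical placeholders.

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mask_prob=0.15, seed=0):
    """BERT-style MLM masking (illustrative, not the paper's exact code).

    Returns (masked_inputs, labels), where labels hold the original token
    at each masked position and -100 (the conventional ignore index for
    cross-entropy) everywhere else.
    """
    rng = random.Random(seed)
    inputs, labels = list(token_ids), []
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels.append(tok)          # model must predict the original token
            r = rng.random()
            if r < 0.8:
                inputs[i] = mask_id     # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.randrange(vocab_size)  # 10%: random token
            # remaining 10%: keep the original token unchanged
        else:
            labels.append(-100)         # position excluded from the loss
    return inputs, labels
```

In practice this logic is what `transformers.DataCollatorForLanguageModeling(mlm_probability=0.15)` applies on the fly during fine-tuning; the standalone function above just makes the 80/10/10 split explicit.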
Problem

Research questions and friction points this paper is trying to address.

Addressing underrepresentation of Chakma in language models
Developing cross-lingual transfer via Bangla-transliterated Chakma corpus
Evaluating MLM fine-tuning for critically low-resource language
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuned multilingual models via transliteration-based MLM
Used curated Bangla-transliterated Chakma corpus
Achieved high token accuracy with low perplexity
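The two metrics quoted above are related in a simple way: perplexity is the exponential of the mean token-level cross-entropy, and token accuracy is computed only over the masked positions (ignoring the -100 padding label). A minimal sketch, assuming per-token negative log-likelihoods are already available:

```python
import math

def perplexity(nll_losses):
    """Perplexity = exp(mean per-token cross-entropy).

    A reported perplexity of 2.90 thus corresponds to a mean
    MLM loss of ln(2.90) ~= 1.065 nats per masked token.
    """
    return math.exp(sum(nll_losses) / len(nll_losses))

def token_accuracy(predictions, labels, ignore_index=-100):
    """Fraction of masked positions (label != ignore_index)
    where the model's argmax prediction matches the original token."""
    scored = [(p, l) for p, l in zip(predictions, labels) if l != ignore_index]
    return sum(p == l for p, l in scored) / len(scored)
```

This is a generic evaluation sketch, not the paper's released code; variable names are illustrative.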
Authors
Adity Khisa (IIT, University of Dhaka)
Nusrat Jahan Lia (IIT, University of Dhaka)
Tasnim Mahfuz Nafis (IIT, University of Dhaka)
Zarif Masud (Toronto Metropolitan University)
Tanzir Pial (Stony Brook University)
Shebuti Rayana (State University of New York, Old Westbury)
Ahmedul Kabir (Associate Professor, IIT, University of Dhaka)
NLP · AI/ML · Health Informatics · Software Analytics