🤖 AI Summary
Chakma, an extremely low-resource Indo-Aryan language, is severely underrepresented in pretrained language models. Method: We propose a cross-lingual transfer approach grounded in Bengali-script transliteration: (1) constructing the first contextually coherent, human-verified Chakma transliteration corpus; and (2) fine-tuning six multilingual encoders (including mBERT, XLM-RoBERTa, and BanglaBERT) via masked language modeling (MLM). Contribution/Results: The core contribution is leveraging transliteration as a representational bridge that narrows the semantic and distributional gap between low- and high-resource languages. Experiments show that the fine-tuned models achieve up to 73.54% token accuracy and a perplexity of 2.90 on Chakma, substantially outperforming their pre-trained baselines. The publicly released dataset provides critical infrastructure for future low-resource NLP research on Chakma and related languages.
📝 Abstract
As an Indo-Aryan language with limited available data, Chakma remains largely underrepresented in language models. In this work, we introduce a novel corpus of contextually coherent Bangla-transliterated Chakma, curated from Chakma literature and validated by native speakers. Using this dataset, we fine-tune six encoder-based multilingual and regional transformer models (mBERT, XLM-RoBERTa, DistilBERT, DeBERTaV3, BanglaBERT, and IndicBERT) on the masked language modeling (MLM) task. Our experiments show that fine-tuned multilingual models outperform their pre-trained counterparts when adapted to Bangla-transliterated Chakma, achieving up to 73.54% token accuracy and a perplexity as low as 2.90. Our analysis further highlights the impact of data quality on model performance and exposes the limitations of OCR pipelines for morphologically rich Indic scripts. These results demonstrate that Bangla transliteration can serve as an effective bridge for transfer learning to Chakma, and we release our manually validated monolingual dataset to encourage further research on multilingual language modeling for low-resource languages.
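The two metrics reported above have a simple relationship to the MLM training objective: perplexity is the exponential of the mean cross-entropy loss over masked positions, and token accuracy is the fraction of masked positions where the model's top prediction matches the original token. A minimal illustrative sketch (not the authors' evaluation code; the function names and numbers below are hypothetical):

```python
import math

def mlm_perplexity(token_losses):
    """Perplexity = exp(mean cross-entropy loss) over masked positions."""
    mean_loss = sum(token_losses) / len(token_losses)
    return math.exp(mean_loss)

def token_accuracy(predicted_ids, true_ids):
    """Fraction of masked positions where the argmax prediction is correct."""
    correct = sum(p == t for p, t in zip(predicted_ids, true_ids))
    return correct / len(true_ids)

# Illustrative numbers only: a mean masked-LM loss of ln(2.90) ~= 1.0647
# corresponds to the perplexity of 2.90 reported in the abstract.
print(round(mlm_perplexity([1.0647]), 2))           # → 2.9
print(token_accuracy([5, 7, 9, 2], [5, 7, 1, 2]))   # → 0.75
```

Lower perplexity therefore means the model assigns higher probability to the held-out Chakma tokens, which is why it complements raw token accuracy as an evaluation metric.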