Bhaasha, Bhasa, Zaban: A Survey for Low-Resourced Languages in South Asia -- Current Stage and Challenges

📅 2025-09-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work systematically assesses the current state and challenges of natural language processing (NLP) for low-resource South Asian languagesโ€”over 650 in total, most lacking computational resources or pretrained models. Through a literature review and resource inventory spanning 2020โ€“2024, it analyzes the adaptability of Transformer-based architectures (e.g., BERT, T5, GPT) across data availability, model design, and task formulation. Key challenges identified include severe data scarcity in critical domains (e.g., healthcare), difficulties in modeling code-mixed text, and the absence of standardized, multilingual evaluation benchmarks. To address these, the paper proposes a culturally grounded, unified evaluation framework tailored to South Asian linguistic and sociolinguistic realities. Furthermore, it releases the first open-source, multilingual, multi-task, and multi-source NLP resource repository for South Asia. This contribution provides theoretical foundations, reproducible evaluation baselines, and infrastructural support for developing fair, robust, and scalable low-resource language models.

๐Ÿ“ Abstract
Rapid developments of large language models have revolutionized many NLP tasks for English data. Unfortunately, the models and their evaluations for low-resource languages are being overlooked, especially for languages in South Asia. Although there are more than 650 languages in South Asia, many of them either have very limited computational resources or are missing from existing language models. Thus, a concrete question to be answered is: Can we assess the current stage and challenges to inform our NLP community and facilitate model developments for South Asian languages? In this survey, we have comprehensively examined current efforts and challenges of NLP models for South Asian languages by retrieving studies since 2020, with a focus on transformer-based models, such as BERT, T5, and GPT. We present advances and gaps across three essential aspects: data, models, and tasks, such as available data sources, fine-tuning strategies, and domain applications. Our findings highlight substantial issues, including missing data in critical domains (e.g., health), code-mixing, and lack of standardized evaluation benchmarks. Our survey aims to raise awareness within the NLP community for more targeted data curation, unify benchmarks tailored to cultural and linguistic nuances of South Asia, and encourage an equitable representation of South Asian languages. The complete list of resources is available at: https://github.com/trust-nlp/LM4SouthAsia-Survey.
Problem

Research questions and friction points this paper is trying to address.

Assessing NLP model challenges for low-resourced South Asian languages
Examining data gaps and missing benchmarks for 650+ languages
Addressing limited computational resources and evaluation standardization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surveying transformer-based models such as BERT, T5, and GPT
Analyzing data gaps and fine-tuning strategies
Proposing unified benchmarks for linguistic nuances
Sampoorna Poria
Dept of Computer Science & Engineering, West Bengal University of Technology
Xiaolei Huang
University of Memphis
Machine Learning · Natural Language Processing · Health Informatics · LLM for Sciences