🤖 AI Summary
Kurdish Sorani lacks high-quality, manually annotated named entity recognition (NER) resources, hindering NLP development for this under-resourced language. Method: We construct the first high-quality, human-annotated Sorani NER dataset comprising 64,563 tokens and conduct a systematic, controlled evaluation of classical (CRF) and neural (BiLSTM) models under identical preprocessing and evaluation protocols. Contribution/Results: CRF significantly outperforms BiLSTM in this low-resource setting (F1 = 0.825 vs. 0.706), challenging the assumption that deep learning inherently surpasses feature-engineered, structured models when training data is scarce. This underscores the critical role of discriminative feature engineering and probabilistic sequence modeling in low-resource NER. Our dataset fills a foundational gap in Sorani NLP infrastructure, and our empirical findings provide a pragmatic, resource-aware methodology for developing robust NER systems for underrepresented languages—advancing fairness, inclusivity, and global applicability in NLP.
📝 Abstract
This work contributes to the inclusivity and global applicability of natural language processing by introducing the first named entity recognition (NER) dataset for Kurdish Sorani, a low-resource and under-represented language, comprising 64,563 annotated tokens. It also provides a tool that facilitates this annotation task for Sorani and many other languages, and performs a thorough comparative analysis spanning classical machine learning models and neural systems. The results challenge established assumptions about the advantage of neural approaches in NLP: conventional methods, in particular conditional random fields (CRF), achieve an F1-score of 0.825, significantly outperforming BiLSTM-based models (0.706). These findings indicate that simpler, more computationally efficient classical frameworks can outperform neural architectures in low-resource settings.
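To make concrete what "discriminative feature engineering" means for a CRF tagger, the sketch below shows the kind of hand-crafted per-token feature templates (casing, affixes, neighboring words) such models typically consume. The specific features and the example sentence are illustrative assumptions, not the feature set actually used in this work.

```python
# Illustrative token-feature templates for CRF-style NER.
# NOTE: these templates are assumptions for demonstration only;
# the paper's actual feature set is not reproduced here.

def token_features(sentence, i):
    """Build a feature dict for the token at position i of a tokenized sentence."""
    word = sentence[i]
    feats = {
        "word.lower": word.lower(),
        "word.isupper": word.isupper(),
        "word.istitle": word.istitle(),   # title case often signals a name
        "word.isdigit": word.isdigit(),
        "prefix3": word[:3],
        "suffix3": word[-3:],
    }
    # Context features: neighboring tokens help the CRF model label transitions.
    if i > 0:
        feats["prev.lower"] = sentence[i - 1].lower()
    else:
        feats["BOS"] = True  # beginning-of-sentence marker
    if i < len(sentence) - 1:
        feats["next.lower"] = sentence[i + 1].lower()
    else:
        feats["EOS"] = True  # end-of-sentence marker
    return feats

# Toy example (hypothetical sentence, not from the dataset):
sentence = ["Hewlêr", "is", "a", "city"]
features = [token_features(sentence, i) for i in range(len(sentence))]
```

Feature dicts of this shape are exactly what CRF toolkits such as sklearn-crfsuite accept per token, making the feature-engineering step explicit and easy to audit in low-resource settings.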