Human-Annotated NER Dataset for the Kyrgyz Language

📅 2025-09-23
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the scarcity of high-quality manually annotated data for Kyrgyz named entity recognition (NER), this study introduces KyrgyzNER, the first large-scale, fine-grained, human-annotated NER dataset for Kyrgyz, comprising 1,499 news articles, 10,900 sentences, and 39,075 entity mentions across 27 entity types. We establish linguistically informed annotation guidelines tailored to Kyrgyz morphology and syntax. We systematically evaluate both traditional conditional random fields (CRF) and multilingual pre-trained language models (e.g., mRoBERTa) on this benchmark. Experimental results demonstrate that mRoBERTa achieves the best trade-off between precision and recall, significantly outperforming CRF-based approaches; other multilingual models also exhibit robust performance. KyrgyzNER fills a critical gap in NER resources for low-resource Turkic languages and serves as a foundational benchmark for cross-lingual information extraction research, providing both empirical evidence and standardized evaluation infrastructure.

๐Ÿ“ Abstract
We introduce KyrgyzNER, the first manually annotated named entity recognition dataset for the Kyrgyz language. Comprising 1,499 news articles from the 24.KG news portal, the dataset contains 10,900 sentences and 39,075 entity mentions across 27 named entity classes. We present our annotation scheme, discuss the challenges encountered in the annotation process, and report descriptive statistics. We also evaluate several named entity recognition models, including traditional sequence labeling approaches based on conditional random fields and state-of-the-art multilingual transformer-based models fine-tuned on our dataset. While all models struggle with rare entity categories, models such as the multilingual RoBERTa variant pretrained on a large corpus spanning many languages achieve a promising balance between precision and recall. These findings highlight both the challenges and the opportunities of using multilingual pretrained models for processing languages with limited resources. Although the multilingual RoBERTa model performed best, other multilingual models yielded comparable results, suggesting that future work exploring more granular annotation schemes may offer deeper insights for the evaluation of Kyrgyz language processing pipelines.
Problem

Research questions and friction points this paper is trying to address.

Creating the first manually annotated NER dataset for the Kyrgyz language
Evaluating NER models on rare entity categories in Kyrgyz
Exploring multilingual models for processing low-resource Kyrgyz
Innovation

Methods, ideas, or system contributions that make the work stand out.

First manually annotated Kyrgyz NER dataset
Evaluated CRF and multilingual transformer models
Multilingual RoBERTa achieved best performance balance
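The precision/recall trade-off reported above is typically measured with strict entity-level matching: a predicted entity counts as correct only if its span boundaries and type both match the gold annotation. As a hedged illustration (not the paper's actual evaluation code; tag names and sentences here are hypothetical), a minimal strict-match scorer for BIO-tagged sequences can be sketched as:

```python
# Minimal sketch of strict entity-level precision/recall/F1 for BIO tags,
# the scoring scheme commonly used for NER benchmarks. Illustrative only.

def bio_spans(tags):
    """Extract (start, end, type) spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((start, i, etype))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue  # current span keeps growing
        else:
            if start is not None:
                spans.append((start, i, etype))  # close the open span
            start, etype = None, None
    if start is not None:
        spans.append((start, len(tags), etype))
    return spans

def entity_f1(gold_tags, pred_tags):
    """Micro-averaged strict-match precision, recall, and F1 over sentences."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_tags, pred_tags):
        g, p = set(bio_spans(gold)), set(bio_spans(pred))
        tp += len(g & p)   # exact span+type matches
        fp += len(p - g)   # spurious predictions
        fn += len(g - p)   # missed gold entities
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Hypothetical example: the second entity is found but mistyped (LOC vs ORG),
# so it counts as both a false positive and a false negative under strict match.
gold = [["B-PER", "I-PER", "O", "B-LOC"]]
pred = [["B-PER", "I-PER", "O", "B-ORG"]]
print(entity_f1(gold, pred))  # (0.5, 0.5, 0.5)
```

Libraries such as seqeval implement the same scheme; this sketch just makes explicit why mistyped or mis-bounded entities hurt both precision and recall at once.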
Timur Turatali
The Cramer Project, Bishkek, Kyrgyzstan
Anton Alekseev
PDMI RAS, SPbU, KFU, St. Petersburg/Kazan, Russia; KSTU n.a. I. Razzakov, Bishkek, Kyrgyzstan
Gulira Jumalieva
Dep. of Computer Linguistics, KSTU n.a. I. Razzakov, Bishkek, Kyrgyzstan
Gulnara Kabaeva
Information Technology Institute, KSTU n.a. I. Razzakov, Bishkek, Kyrgyzstan
Sergey Nikolenko
Steklov Institute of Mathematics at St. Petersburg, Russia
Machine Learning · Theoretical Computer Science · Networking