myNER: Contextualized Burmese Named Entity Recognition with Bidirectional LSTM and fastText Embeddings via Joint Training with POS Tagging

📅 2025-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the low-resource challenge in Burmese named entity recognition (NER) caused by scarce annotated data. We introduce myNER, the first word-level Burmese NER corpus annotated with part-of-speech (POS) tags and seven entity types, filling a critical gap in Burmese NER resources. Methodologically, we propose a novel multi-task joint labeling framework that integrates POS information into Burmese NER modeling for the first time, and systematically evaluate the benefits of contextualized word embeddings and joint training. Experiments employ both CRF and BiLSTM-CRF architectures, leveraging fine-tuned fastText embeddings. Under single-task and POS-augmented settings, our models achieve state-of-the-art performance: CRF+fastText attains 98.18% accuracy and 98.11% weighted F1-score; BiLSTM-CRF achieves 97.91% accuracy and 97.76% weighted F1-score. These results significantly advance low-resource NER research for Burmese.

📝 Abstract
Named Entity Recognition (NER) involves identifying and categorizing named entities within textual data. Despite its significance, NER research has often overlooked low-resource languages like Myanmar (Burmese), primarily due to the lack of publicly available annotated datasets. To address this, we introduce myNER, a novel word-level NER corpus featuring a 7-tag annotation scheme, enriched with Part-of-Speech (POS) tagging to provide additional syntactic information. Alongside the corpus, we conduct a comprehensive evaluation of NER models, including Conditional Random Fields (CRF), Bidirectional LSTM (BiLSTM)-CRF, and their combinations with fastText embeddings in different settings. Our experiments reveal the effectiveness of contextualized word embeddings and the impact of joint training with POS tagging, demonstrating significant performance improvements across models. The traditional CRF joint-task model with fastText embeddings as a feature achieved the best overall result, with 0.9818 accuracy, a 0.9811 weighted F1 score, and a 0.7429 macro F1 score. BiLSTM-CRF with fine-tuned fastText embeddings achieved its best result of 0.9791 accuracy, a 0.9776 weighted F1 score, and a 0.7395 macro F1 score.
Problem

Research questions and friction points this paper is trying to address.

Addressing lack of NER resources for Burmese language
Evaluating NER models with fastText embeddings and POS tagging
Improving NER performance via joint training with syntactic information
Innovation

Methods, ideas, or system contributions that make the work stand out.

BiLSTM-CRF model with fastText embeddings
Joint training with POS tagging
Contextualized word embeddings for Burmese NER
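The joint training with POS tagging listed above can be illustrated with a simple joint-label transformation: each token's POS tag and NER tag are merged into one composite label, so a single sequence labeler (CRF or BiLSTM-CRF) learns both tasks at once. A minimal sketch in Python, assuming BIO-style NER tags and hypothetical example tagsets (not necessarily the paper's exact scheme):

```python
def to_joint_labels(pos_tags, ner_tags):
    """Merge per-token POS and NER tags into composite labels,
    so one sequence labeler is trained on both tasks jointly."""
    return [f"{ner}|{pos}" for pos, ner in zip(pos_tags, ner_tags)]

def from_joint_labels(joint_labels):
    """Split composite labels back into separate (POS, NER) sequences."""
    ner, pos = zip(*(label.split("|") for label in joint_labels))
    return list(pos), list(ner)

# Hypothetical token-level tags for a three-token sentence
pos = ["NOUN", "VERB", "NOUN"]
ner = ["B-PER", "O", "B-LOC"]

joint = to_joint_labels(pos, ner)
# joint == ["B-PER|NOUN", "O|VERB", "B-LOC|NOUN"]
assert from_joint_labels(joint) == (pos, ner)
```

After training on the composite labels, predictions are split back into the two tag streams, letting POS information regularize the NER decisions without a second model head.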
Kaung Lwin Thant
VMES, Assumption University, Bangkok, Thailand
Kwankamol Nongpong
VMES, Assumption University, Bangkok, Thailand
Ye Kyaw Thu
LST Lab., NECTEC (Thai), NLP Research Lab, UTYCC (Myanmar), Language Understanding Lab., (Myanmar)
Natural Language Processing, Machine Translation, Speech Processing, AI
Thura Aung
School of Engineering, KMITL, Bangkok, Thailand
Khaing Hsu Wai
Graduate School of Engineering Science, Akita University, Akita, Japan
Thazin Myint Oo
LU. Laboratory, Yangon, Myanmar