🤖 AI Summary
This work addresses the low-resource challenge in Burmese named entity recognition (NER): the scarcity of annotated data. We introduce myNER, the first word-level Burmese NER corpus annotated with part-of-speech (POS) tags and a seven-tag entity scheme, filling a critical gap in Burmese NER resources. Methodologically, we propose a multi-task joint labeling framework that integrates POS information into Burmese NER modeling, and we systematically evaluate the benefits of contextualized word embeddings and joint training. Experiments employ both CRF and BiLSTM-CRF architectures with fastText embeddings. The joint-task CRF with fastText features attains 98.18% accuracy and a 98.11% weighted F1 score; BiLSTM-CRF with fine-tuned fastText embeddings achieves 97.91% accuracy and a 97.76% weighted F1 score. These results advance low-resource NER research for Burmese.
📝 Abstract
Named Entity Recognition (NER) involves identifying and categorizing named entities within textual data. Despite its significance, NER research has often overlooked low-resource languages like Myanmar (Burmese), primarily due to the lack of publicly available annotated datasets. To address this, we introduce myNER, a novel word-level NER corpus featuring a 7-tag annotation scheme, enriched with Part-of-Speech (POS) tagging to provide additional syntactic information. Alongside the corpus, we conduct a comprehensive evaluation of NER models, including Conditional Random Fields (CRF), Bidirectional LSTM (BiLSTM)-CRF, and their combinations with fastText embeddings in different settings. Our experiments reveal the effectiveness of contextualized word embeddings and the impact of joint training with POS tagging, demonstrating significant performance improvements across models. The traditional CRF joint-task model with fastText embeddings as a feature achieved the best result: 0.9818 accuracy, 0.9811 weighted F1 score, and 0.7429 macro F1 score. BiLSTM-CRF with fine-tuned fastText embeddings achieved its best result of 0.9791 accuracy, 0.9776 weighted F1 score, and 0.7395 macro F1 score.
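As a rough illustration of how POS information can feed a word-level CRF tagger of the kind evaluated above, the sketch below builds a per-token feature dictionary that includes the token's POS tag and its neighbors' tags. This is a minimal sketch, not the paper's implementation: the function name, feature keys, and the example Burmese tokens and tags are all hypothetical.

```python
# Minimal sketch of POS-augmented CRF features for word-level NER.
# All names and the example sentence are illustrative, not from the paper.
def token_features(sentence, i):
    """sentence: list of (word, pos_tag) pairs; returns features for token i."""
    word, pos = sentence[i]
    feats = {
        "word": word,
        "pos": pos,                       # syntactic cue from the POS layer
        "is_first": i == 0,               # sentence-boundary indicators
        "is_last": i == len(sentence) - 1,
    }
    if i > 0:                             # left-context word and POS tag
        feats["prev_word"], feats["prev_pos"] = sentence[i - 1]
    if i < len(sentence) - 1:             # right-context word and POS tag
        feats["next_word"], feats["next_pos"] = sentence[i + 1]
    return feats

# Hypothetical segmented Burmese sentence with POS tags.
sent = [("မောင်မောင်", "NOUN"), ("သည်", "PART"), ("ရန်ကုန်", "NOUN")]
print(token_features(sent, 1)["prev_pos"])  # POS of the preceding token
```

In a CRF toolkit such feature dictionaries would be computed for every token and paired with NER labels for training; the POS features give the model the extra syntactic signal that the joint-task setting exploits.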