What Matters When Building Universal Multilingual Named Entity Recognition Models?

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of systematic evaluation of key design choices in multilingual named entity recognition (NER) models, which has obscured the true contributions of individual components to overall performance. The study presents the first comprehensive disentangled analysis across multiple dimensions—including model architecture, multilingual Transformer backbones, training objectives, and data composition—supported by large-scale ablation experiments. Based on these insights, the authors develop Otter, an efficient and general-purpose NER model supporting over 100 languages. Otter achieves a 5.3-point F1 improvement over GLiNER-x-base and matches the performance of much larger models such as Qwen3-32B, while offering significantly higher inference efficiency.
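To make the design space concrete: span-based universal NER models of the kind compared here (e.g. the GLiNER family) typically enumerate candidate token spans and score each one against embeddings of the requested entity-type labels. The sketch below illustrates that scoring scheme only; the embedding size, threshold, and toy inputs are invented for illustration and this is not Otter's actual architecture.

```python
# Illustrative sketch of GLiNER-style span classification (assumed scheme,
# not the paper's implementation). Candidate spans are scored against
# entity-type embeddings; spans above a threshold are kept.
import numpy as np

rng = np.random.default_rng(0)

def enumerate_spans(n_tokens, max_width=3):
    """All (start, end) candidate spans up to max_width tokens (end exclusive)."""
    return [(i, j) for i in range(n_tokens)
            for j in range(i + 1, min(i + max_width, n_tokens) + 1)]

def score_spans(span_embs, label_embs):
    """Dot-product similarity between span and label embeddings,
    squashed to (0, 1) with a sigmoid (one binary decision per span/label)."""
    logits = span_embs @ label_embs.T
    return 1.0 / (1.0 + np.exp(-logits))

tokens = ["Ada", "Lovelace", "lived", "in", "London"]
labels = ["person", "location"]
spans = enumerate_spans(len(tokens))

# Stand-in random embeddings; a real model would produce these from a
# multilingual Transformer backbone, which is one of the design choices
# the paper ablates.
span_embs = rng.normal(size=(len(spans), 8))
label_embs = rng.normal(size=(len(labels), 8))

probs = score_spans(span_embs, label_embs)
predictions = [(spans[i], labels[j])
               for i in range(len(spans)) for j in range(len(labels))
               if probs[i, j] > 0.9]
```

Because the label side is just a set of embeddings, new entity types and new languages can be handled without changing the output layer, which is what makes this family of architectures "universal".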

📝 Abstract
Recent progress in universal multilingual named entity recognition (NER) has been driven by advances in multilingual transformer models and task-specific architectures, loss functions, and training datasets. Despite substantial prior work, we find that many critical design decisions for such models are made without systematic justification, with architectural components, training objectives, and data sources evaluated only in combination rather than in isolation. We argue that these decisions impede progress in the field by making it difficult to identify which choices improve model performance. In this work, we conduct extensive experiments on architectures, transformer backbones, training objectives, and data composition across a wide range of languages. Based on these insights, we introduce Otter, a universal multilingual NER model supporting over 100 languages. Otter achieves consistent improvements over strong multilingual NER baselines, outperforming GLiNER-x-base by 5.3pp in F1 and achieving competitive performance compared to large generative models such as Qwen3-32B, while being substantially more efficient. We release model checkpoints and training and evaluation code to facilitate reproducibility and future research.
Problem

Research questions and friction points this paper is trying to address.

multilingual NER
model design
systematic evaluation
named entity recognition
universal models
Innovation

Methods, ideas, or system contributions that make the work stand out.

multilingual NER
systematic ablation study
universal NER model
efficient NER architecture
cross-lingual transfer