Nested Named Entity Recognition as Single-Pass Sequence Labeling

📅 2025-05-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenges of high computational complexity and structural intricacy in nested named entity recognition (NNER). We propose a single-pass sequence labeling approach that linearizes nested entity structures—represented as constituent trees—into token-level label sequences, thereby achieving the first complete reduction of NNER to standard token classification. Unlike prior methods, our approach eliminates the need for span enumeration, hierarchical decoding, or graph-based modeling; instead, it relies solely on a pretrained encoder (e.g., BERT) and a lightweight linearization strategy, ensuring seamless integration with mainstream sequence labeling frameworks. Crucially, it preserves expressive power while reducing inference time complexity to *O(n)*, significantly improving training and deployment efficiency. Extensive experiments demonstrate state-of-the-art performance across multiple benchmarks, validating the method’s effectiveness, simplicity, and strong generalization capability.
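The core idea is to encode the nested entity tree as one label per token, so a standard tagger can recover the full structure. The sketch below shows one possible bracket-based linearization; the function name, label format, and example entities are hypothetical illustrations, not the paper's actual encoding scheme.

```python
def linearize(tokens, entities):
    """Encode nested entity spans as one label per token (illustrative sketch).

    entities: list of (start, end, type) with inclusive token indices,
    where spans are either disjoint or properly nested (a tree).
    Each label records which entity types open at that token and how many
    spans close there, so the nesting can be rebuilt from labels alone.
    """
    opens = [[] for _ in tokens]   # entity types whose span starts here (outermost first)
    closes = [0] * len(tokens)     # number of spans ending here
    # Sort by start ascending, end descending, so outer spans open first.
    for start, end, etype in sorted(entities, key=lambda e: (e[0], -e[1])):
        opens[start].append(etype)
        closes[end] += 1
    labels = []
    for i in range(len(tokens)):
        labels.append("".join(f"({t}" for t in opens[i]) + "*" + ")" * closes[i])
    return labels

# "University of Sydney" (ORG) contains the nested entity "Sydney" (LOC):
tokens = ["The", "University", "of", "Sydney", "campus"]
entities = [(1, 3, "ORG"), (3, 3, "LOC")]
# -> ["*", "(ORG*", "*", "(LOC*))", "*"]
```

Because every token receives exactly one label, tagging takes exactly n actions, and the label inventory can be handed to any off-the-shelf sequence labeling toolkit.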

📝 Abstract
We cast nested named entity recognition (NNER) as a sequence labeling task by leveraging prior work that linearizes constituency structures, effectively reducing the complexity of this structured prediction problem to straightforward token classification. By combining these constituency linearizations with pretrained encoders, our method captures nested entities while performing exactly $n$ tagging actions. Our approach achieves competitive performance compared to less efficient systems, and it can be trained using any off-the-shelf sequence labeling library.
Problem

Research questions and friction points this paper is trying to address.

Can nested NER be reduced entirely to token-level sequence labeling?
Can constituency linearizations combined with pretrained encoders capture nested entities?
Can competitive accuracy be achieved without costly span enumeration or hierarchical decoding?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linearizes nested entity trees into token-level label sequences
Combines constituency linearizations with pretrained encoders (e.g., BERT)
Matches less efficient systems while performing exactly n tagging actions
Alberto Muñoz-Ortiz
Universidade da Coruña, CITIC, Spain
David Vilares
Universidade da Coruña, CITIC
Natural Language Processing
Caio Corro
INSA Rennes, IRISA
Natural Language Processing, Structured Prediction
Carlos Gómez-Rodríguez
Universidade da Coruña, CITIC, Spain