ALLabel: Three-stage Active Learning for LLM-based Entity Recognition using Demonstration Retrieval

📅 2025-09-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high annotation cost of entity recognition in scientific data and the substantial computational overhead of fully fine-tuning large language models (LLMs), this paper proposes ALLabel, a three-stage active learning framework. ALLabel combines uncertainty sampling, diversity promotion, and representativeness selection to construct a compact, high-quality demonstration set, enabling retrieval-augmented in-context learning (RAG-ICL) without full model fine-tuning. Its key innovation lies in coupling multi-strategy active learning with retrieval-driven LLM-based entity recognition. Evaluated on three domain-specific scientific datasets, ALLabel matches fully supervised performance using only 5%-10% of the labeled samples, significantly outperforming a diverse set of baselines. The framework demonstrates strong generalizability and extensibility, establishing an efficient, low-cost paradigm for entity recognition in low-resource scientific text understanding.

📝 Abstract
Many contemporary data-driven research efforts in the natural sciences, such as chemistry and materials science, require large-scale, high-performance entity recognition from scientific datasets. Large language models (LLMs) have increasingly been adopted for the entity recognition task, mirroring their adoption across the full spectrum of NLP tasks. Prevailing entity recognition LLMs rely on fine-tuning, yet the fine-tuning process often incurs significant cost. To achieve the best performance-cost trade-off, we propose ALLabel, a three-stage framework designed to select the most informative and representative samples when preparing demonstrations for LLM modeling. The annotated examples are used to construct a ground-truth retrieval corpus for LLM in-context learning. By sequentially employing three distinct active learning strategies, ALLabel consistently outperforms all baselines under the same annotation budget across three specialized domain datasets. Experimental results also demonstrate that selectively annotating only 5%-10% of a dataset with ALLabel achieves performance comparable to annotating the entire dataset. Further analyses and ablation studies verify the effectiveness and generalizability of our proposal.
Problem

Research questions and friction points this paper is trying to address.

Reducing annotation costs for LLM-based entity recognition
Selecting informative samples for demonstration retrieval
Achieving high performance with minimal labeled data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-stage active learning framework
Demonstration retrieval for LLM
Selective annotation reduces cost
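The three-stage selection idea above can be sketched in code. This is a minimal illustrative pipeline, not the paper's exact procedure: the specific criteria (top-k uncertainty, greedy farthest-point sampling for diversity, centroid proximity for representativeness) and the function name `three_stage_select` are assumptions chosen to make the staging concrete.

```python
import numpy as np

def three_stage_select(embeddings, uncertainties, n_uncertain, n_diverse, n_final):
    """Illustrative three-stage sample selection for demonstration annotation.

    Stage 1: uncertainty sampling  -> keep the most uncertain samples.
    Stage 2: diversity promotion   -> greedy farthest-point sampling.
    Stage 3: representativeness    -> keep samples closest to the subset centroid.
    """
    # Stage 1: keep the n_uncertain samples the model is least confident about.
    pool = list(np.argsort(uncertainties)[::-1][:n_uncertain])

    # Stage 2: greedily pick samples far from everything already selected,
    # so the demonstration set covers distinct regions of embedding space.
    diverse = [pool[0]]
    candidates = pool[1:]
    while len(diverse) < min(n_diverse, len(pool)):
        # Distance of each candidate to its nearest already-selected sample.
        d = [min(np.linalg.norm(embeddings[c] - embeddings[s]) for s in diverse)
             for c in candidates]
        diverse.append(candidates.pop(int(np.argmax(d))))

    # Stage 3: among the diverse subset, prefer "typical" samples, approximated
    # here as those closest to the subset centroid.
    centroid = embeddings[diverse].mean(axis=0)
    dists = [np.linalg.norm(embeddings[i] - centroid) for i in diverse]
    return [diverse[i] for i in np.argsort(dists)[:n_final]]

# Example usage on synthetic data: select 5 demonstrations from a pool of 50.
rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 8))     # sentence embeddings (assumed precomputed)
unc = rng.random(50)               # per-sample uncertainty scores (assumed)
picked = three_stage_select(emb, unc, n_uncertain=20, n_diverse=10, n_final=5)
```

The selected indices would then be annotated and stored in the retrieval corpus, from which demonstrations are fetched per query for in-context learning.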
Authors
Zihan Chen (Beihang University)
Lei Shi (Beihang University)
Weize Wu (Beihang University)
Qiji Zhou (Westlake University)
Yue Zhang (Westlake University)