🤖 AI Summary
To address the high annotation cost of entity recognition in scientific data and the substantial computational overhead of fully fine-tuning large language models (LLMs), this paper proposes ALLabel, a three-stage active learning framework. ALLabel combines uncertainty sampling, diversity promotion, and representativeness selection to construct a compact, high-quality demonstration set, enabling retrieval-augmented in-context learning (RAG-ICL) without full model fine-tuning. Its key innovation is the tight coupling of multi-strategy active learning with RAG-driven, LLM-based entity recognition. Evaluated on three domain-specific scientific datasets, ALLabel matches fully supervised performance using only 5%-10% of the labeled samples, significantly outperforming a diverse set of baselines. The framework also demonstrates strong generalizability and extensibility, establishing an efficient, low-cost paradigm for entity recognition in low-resource scientific text understanding.
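The three-stage selection described above can be sketched as a simple pipeline: filter by model uncertainty, then enforce diversity with greedy farthest-point selection, then keep the most representative candidates. The concrete scoring rules below (max-min embedding distance for diversity, distance to the pool centroid for representativeness) are illustrative assumptions, not ALLabel's exact criteria:

```python
import numpy as np

def select_demonstrations(embeddings, uncertainty, budget, pool_factor=3):
    """Hypothetical three-stage active-learning selection.

    Stage 1 (uncertainty): keep the pool_factor * budget most uncertain samples.
    Stage 2 (diversity): greedy max-min (farthest-point) filtering on embeddings.
    Stage 3 (representativeness): keep the budget samples nearest the centroid.
    """
    n = len(embeddings)

    # Stage 1: restrict to the most uncertain candidates.
    k1 = min(n, pool_factor * budget)
    pool = np.argsort(-uncertainty)[:k1]

    # Stage 2: greedily pick points far from everything chosen so far,
    # which promotes diversity in the demonstration pool.
    chosen = [int(pool[0])]
    remaining = [int(i) for i in pool[1:]]
    while len(chosen) < min(len(pool), 2 * budget):
        dists = [min(np.linalg.norm(embeddings[r] - embeddings[c]) for c in chosen)
                 for r in remaining]
        chosen.append(remaining.pop(int(np.argmax(dists))))

    # Stage 3: a crude representativeness proxy -- prefer samples close
    # to the overall embedding centroid.
    centroid = embeddings.mean(axis=0)
    chosen.sort(key=lambda i: np.linalg.norm(embeddings[i] - centroid))
    return chosen[:budget]
```

In the paper's setting, the selected samples would then be human-annotated and stored as the ground-truth retrieval corpus, from which demonstrations are retrieved per query for in-context learning.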
📝 Abstract
Many contemporary data-driven research efforts in the natural sciences, such as chemistry and materials science, require large-scale, high-accuracy entity recognition from scientific datasets. Large language models (LLMs) have increasingly been adopted for entity recognition, mirroring their broader adoption across NLP tasks. Prevailing LLM-based entity recognition methods rely on fine-tuning, yet the fine-tuning process often incurs significant cost. To achieve a better performance-cost trade-off, we propose ALLabel, a three-stage framework that selects the most informative and representative samples to serve as demonstrations for the LLM. The annotated examples are used to construct a ground-truth retrieval corpus for LLM in-context learning. By sequentially applying three distinct active learning strategies, ALLabel consistently outperforms all baselines under the same annotation budget across three specialized domain datasets. Experimental results further show that selectively annotating only 5%-10% of the dataset with ALLabel achieves performance comparable to annotating the entire dataset. Further analyses and ablation studies verify the effectiveness and generalizability of our approach.