🤖 AI Summary
Named entity recognition (NER) in low-resource industrial domains, such as manufacturing and maintenance operations, faces severe data scarcity that limits the applicability of conventional supervised models.
Method: This paper proposes FsPONER, a few-shot prompt optimization framework tailored to domain-specific NER, and is among the first to systematically evaluate the few-shot generalization capability of large language models (LLMs) for NER in domain-specific scenarios. FsPONER comprises three few-shot example selection strategies: random sampling, TF-IDF vector similarity, and a combination of the two, which are used to construct high-quality, task-informed prompts. The framework is compatible with diverse LLMs, including GPT-4-32K, GPT-3.5-Turbo, LLaMA 2-chat, and Vicuna.
Contribution/Results: On industrial NER benchmarks, the TF-IDF variant of FsPONER achieves an F1 score of 89.2%, surpassing fine-tuned BERT by approximately 10 percentage points and demonstrating superior efficacy, robustness, and practicality in data-scarce scenarios.
📝 Abstract
Large Language Models (LLMs) have provided a new pathway for Named Entity Recognition (NER) tasks. Compared with fine-tuning, LLM-powered prompting methods avoid the need for training, conserve substantial computational resources, and rely on minimal annotated data. Previous studies have achieved comparable performance to fully supervised BERT-based fine-tuning approaches on general NER benchmarks. However, none of the previous approaches has investigated the efficiency of LLM-based few-shot learning in domain-specific scenarios. To address this gap, we introduce FsPONER, a novel approach for optimizing few-shot prompts, and evaluate its performance on domain-specific NER datasets, with a focus on industrial manufacturing and maintenance, while using multiple LLMs -- GPT-4-32K, GPT-3.5-Turbo, LLaMA 2-chat, and Vicuna. FsPONER consists of three few-shot selection methods based on random sampling, TF-IDF vectors, and a combination of both. We compare these methods with a general-purpose GPT-NER method as the number of few-shot examples increases and evaluate their optimal NER performance against fine-tuned BERT and LLaMA 2-chat. In the considered real-world scenarios with data scarcity, FsPONER with TF-IDF surpasses fine-tuned models by approximately 10% in F1 score.
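As a rough illustration of the TF-IDF-based few-shot selection idea described above (a minimal sketch, not the authors' implementation: the tokenization, the smoothed IDF weighting, and the maintenance-log sentences below are simplified assumptions), candidate examples and the query sentence are embedded as TF-IDF vectors and the k nearest neighbors under cosine similarity are picked as few-shot demonstrations:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """TF-IDF vectors (as sparse dicts) for a list of token lists."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # +1 keeps shared terms non-zero
    vecs = [{t: (c / len(doc)) * idf[t] for t, c in Counter(doc).items()}
            for doc in docs]
    return vecs, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def select_few_shot(pool, query, k=2):
    """Return the k pool sentences closest to the query in TF-IDF space."""
    docs = [s.lower().split() for s in pool]
    vecs, idf = tfidf_vectors(docs)
    q_tokens = query.lower().split()
    q_vec = {t: (c / len(q_tokens)) * idf.get(t, 0.0)
             for t, c in Counter(q_tokens).items()}
    ranked = sorted(range(len(pool)),
                    key=lambda i: cosine(q_vec, vecs[i]), reverse=True)
    return [pool[i] for i in ranked[:k]]

# Hypothetical maintenance-log sentences standing in for an annotated pool.
pool = [
    "replace the worn bearing on the conveyor motor",
    "inspect hydraulic pump for leaks",
    "bearing temperature exceeded the alarm threshold",
]
examples = select_few_shot(pool, "check the bearing temperature", k=2)
```

In a full prompt-construction pipeline, the selected sentences (with their gold entity annotations) would be prepended to the query as in-context demonstrations; the paper's combined strategy additionally mixes in randomly sampled examples.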