🤖 AI Summary
Addressing the challenges of scarce annotated data, high linguistic variability, and pervasive noise in historical-text NER, this paper proposes a zero-shot and few-shot prompting framework that leverages large language models (LLMs), requiring no fine-tuning or large labeled corpora to recognize person names, locations, and organizations. The study empirically evaluates LLM prompting for historical-text NER on the HIPE-2022 benchmark, where orthographic variants and archaic linguistic noise are pervasive. Experiments show that LLMs achieve reasonably strong performance with little to no task-specific training data; while they still fall short of fully supervised models trained on domain-specific annotations, the results establish prompting as a transferable, lightweight, and easily deployable paradigm for information extraction from low-resource historical documents, with practical implications for digital humanities and historical linguistics.
📝 Abstract
Large language models (LLMs) have demonstrated remarkable versatility across a wide range of natural language processing tasks and domains. One such task is Named Entity Recognition (NER), which involves identifying and classifying proper names in text, such as people, organizations, locations, dates, and other specific entities. NER plays a crucial role in extracting structured information from unstructured text and underpins many downstream applications, including information retrieval.
Traditionally, NER is addressed with supervised machine learning approaches, which require large amounts of annotated training data. Historical texts, however, present a unique challenge: annotated datasets are often scarce or nonexistent due to the high cost and expertise required for manual labeling. In addition, the variability and noise inherent in historical language, such as inconsistent spelling and archaic vocabulary, further complicate the development of reliable NER systems for these sources.
In this study, we explore the feasibility of applying LLMs to NER in historical documents using zero-shot and few-shot prompting strategies, which require little to no task-specific training data. Our experiments, conducted on the HIPE-2022 (Identifying Historical People, Places and other Entities) dataset, show that LLMs can achieve reasonably strong performance on NER tasks in this setting. While their performance falls short of fully supervised models trained on domain-specific annotations, the results are nevertheless promising. These findings suggest that LLMs offer a viable and efficient alternative for information extraction in low-resource or historically significant corpora, where traditional supervised methods are infeasible.
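To make the few-shot prompting strategy concrete, the sketch below assembles an NER prompt from a handful of labeled examples and parses the model's reply into entity spans. This is a minimal illustration under stated assumptions: the prompt wording, the `entity :: TYPE` answer format, and the example sentences are our own inventions, not the prompts or data used in the paper or in HIPE-2022.

```python
# Illustrative few-shot NER prompting for historical text.
# The template, tag set (PER/LOC/ORG), and examples are assumptions,
# not the paper's actual prompts.

FEW_SHOT_EXAMPLES = [
    ("Mr. Gladstone addressed the House of Commons in London.",
     [("Gladstone", "PER"), ("House of Commons", "ORG"), ("London", "LOC")]),
    ("The Compagnie des Indes traded from the port of Lorient.",
     [("Compagnie des Indes", "ORG"), ("Lorient", "LOC")]),
]

def build_prompt(sentence: str) -> str:
    """Assemble a few-shot prompt asking for one `entity :: TYPE` line
    per entity (PER, LOC, or ORG) found in the final sentence."""
    lines = [
        "Extract named entities (PER, LOC, ORG) from the sentence.",
        "Answer with one `entity :: TYPE` line per entity.",
        "",
    ]
    for text, ents in FEW_SHOT_EXAMPLES:
        lines.append(f"Sentence: {text}")
        lines += [f"{e} :: {t}" for e, t in ents]
        lines.append("")
    lines.append(f"Sentence: {sentence}")
    return "\n".join(lines)

def parse_response(response: str) -> list[tuple[str, str]]:
    """Parse `entity :: TYPE` lines from a model reply, ignoring
    any lines that do not follow the expected format."""
    ents = []
    for line in response.splitlines():
        if "::" in line:
            text, _, label = line.partition("::")
            ents.append((text.strip(), label.strip()))
    return ents
```

In a zero-shot variant, `FEW_SHOT_EXAMPLES` would simply be empty, leaving only the task instruction; the parser is deliberately lenient because noisy historical inputs tend to produce noisy model outputs.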