🤖 AI Summary
This study addresses the computational identification of religious polemical language in Tudor-era English Reformation texts. Confronting the dual challenges of scarce historical annotations and diachronic language shift, the authors introduce InviTE, the first expert-annotated corpus of almost 2,000 Early Modern English sentences, and propose an iterative, history-aware annotation protocol integrating text preprocessing, manual refinement, and model-driven candidate selection. Methodologically, they systematically evaluate modeling strategies for invective detection: fine-tuned BERT-based models significantly outperform zero-shot prompted, instruction-tuned large language models (LLMs), and pretraining on historical data yields greater gains than generic pretraining. The contributions include a reusable annotated corpus, a reproducible annotation framework, and a rigorous evaluation benchmark, supporting robust, context-sensitive analysis of religious discourse in early modern England.
📝 Abstract
In this paper, we apply Natural Language Processing (NLP) techniques to historical research, particularly the study of religious invectives in the context of the Protestant Reformation in Tudor England. We outline a workflow spanning from raw data, through pre-processing and data selection, to an iterative annotation process. As a result, we introduce the InviTE corpus -- a corpus of almost 2,000 Early Modern English (EModE) sentences enriched with expert annotations of invective language in 16th-century England. Subsequently, we assess and compare the performance of fine-tuned BERT-based models and zero-shot prompted, instruction-tuned large language models (LLMs), which highlights the superiority of models pre-trained on historical data and fine-tuned for invective detection.