🤖 AI Summary
Contemporary large language models (LLMs) lack robust temporal grounding due to the absence of long-term diachronic structure in their training corpora, limiting their ability to model semantic evolution and historical linguistic norms. Method: We introduce CHRONOBERG—a novel, temporally fine-grained English book corpus spanning 250 years, built from Project Gutenberg—and propose a time-sensitive Valence-Arousal-Dominance (VAD) sentiment analysis framework alongside a historically calibrated lexicon. We conduct temporal pretraining and cross-era evaluation to assess model performance across historical periods. Contribution/Results: Empirical analysis reveals systematic deficiencies in mainstream LLMs regarding semantic drift modeling, identification of historically discriminatory language, and diachronic sentiment understanding. This work provides the first scalable, time-structured corpus and associated analytical tools for historical NLP, while advancing the development of time-aware model training paradigms and evaluation benchmarks.
📝 Abstract
Large language models (LLMs) excel at operating at scale by leveraging social media and various data crawled from the web. While existing corpora are diverse, their frequent lack of long-term temporal structure may limit an LLM's ability to contextualize the semantic and normative evolution of language and to capture diachronic variation. To support such analysis and training, we introduce CHRONOBERG, a temporally structured corpus of English book texts spanning 250 years, curated from Project Gutenberg and enriched with a variety of temporal annotations. First, the edited nature of books enables us to quantify lexical semantic change through time-sensitive Valence-Arousal-Dominance (VAD) analysis and to construct historically calibrated affective lexicons that support temporally grounded interpretation. With these lexicons at hand, we demonstrate the need for modern LLM-based tools to better situate their detection of discriminatory language and their contextualization of sentiment across time periods. Indeed, we show that language models trained sequentially on CHRONOBERG struggle to encode diachronic shifts in meaning, emphasizing the need for temporally aware training and evaluation pipelines and positioning CHRONOBERG as a scalable resource for the study of linguistic change and temporal generalization. Disclaimer: This paper includes samples of language that some readers may find offensive. Open Access: CHRONOBERG is publicly available on HuggingFace at https://huggingface.co/datasets/spaul25/Chronoberg, and code is available at https://github.com/paulsubarna/Chronoberg.
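The idea behind a historically calibrated affective lexicon can be sketched in a few lines: score the same text against era-specific VAD entries and observe how the affect it conveys shifts across periods. This is only an illustrative sketch, not the paper's actual pipeline; the era labels and all valence values below are made-up toy numbers, not drawn from CHRONOBERG.

```python
# Toy sketch of era-sensitive valence scoring (NOT the CHRONOBERG pipeline).
# All lexicon entries are illustrative values on a [-1, 1] valence scale.
from typing import Dict

# Hypothetical per-era valence lexicons; e.g. "awful" once meant "awe-inspiring".
ERA_LEXICONS: Dict[str, Dict[str, float]] = {
    "1800-1850": {"awful": 0.6, "gay": 0.8, "dark": -0.3},
    "1950-2000": {"awful": -0.7, "gay": 0.3, "dark": -0.4},
}

def mean_valence(text: str, era: str) -> float:
    """Average valence of in-lexicon tokens for a given era (0.0 if none match)."""
    lexicon = ERA_LEXICONS[era]
    scores = [lexicon[tok] for tok in text.lower().split() if tok in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

sentence = "what an awful sight"
for era in ERA_LEXICONS:
    print(era, round(mean_valence(sentence, era), 2))
```

The same sentence scores positive under the 1800-1850 lexicon and negative under the 1950-2000 one, which is exactly the kind of affective drift a single time-agnostic lexicon would miss.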