Towards understanding evolution of science through language model series

📅 2024-09-15
🏛️ arXiv.org
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This study addresses the challenge of modeling the temporal evolution of scientific text. To this end, it proposes the AnnualBERT family of models: RoBERTa-based architectures that use whole-word tokenization, comprising a base model pretrained from scratch on the full text of 1.7 million arXiv papers published up to 2008 and annually updated variants incrementally pretrained on each subsequent year's papers. Departing from subword tokenization and the monolithic "one model" paradigm, the models are explicitly aligned with the annual granularity of scientific publishing, and quantifiable probing tasks are designed to analyze shifts in concept representations and knowledge forgetting. AnnualBERT achieves state-of-the-art performance on domain-specific NLP tasks and on link prediction in the arXiv citation network, matches mainstream models on general NLP benchmarks, and reveals empirically grounded patterns of semantic decay and reconstruction in scientific concepts over time.
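The probing idea above can be made concrete with a short sketch: embed the same term under two annual checkpoints and measure how far its representation drifts. This is a minimal illustration, not the paper's actual probing protocol, and the per-year repository ids below are hypothetical; the released models live under https://huggingface.co/jd445/AnnualBERTs, but the exact checkpoint names may differ.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def word_embedding(checkpoint: str, word: str) -> torch.Tensor:
    """Mean-pooled last-layer embedding of `word` under one checkpoint."""
    tok = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModel.from_pretrained(checkpoint).eval()
    with torch.no_grad():
        out = model(**tok(word, return_tensors="pt"))
    return out.last_hidden_state[0].mean(dim=0)

# Compare a concept's representation between two annual models
# (checkpoint ids are hypothetical placeholders).
a = word_embedding("jd445/AnnualBERT-2010", "graphene")
b = word_embedding("jd445/AnnualBERT-2015", "graphene")
drift = 1 - torch.nn.functional.cosine_similarity(a, b, dim=0)
print(f"representation drift for 'graphene': {drift.item():.3f}")
```

Cosine distance between checkpoints is only one possible drift measure; the paper's probing tasks may quantify representation change and forgetting differently.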

📝 Abstract
We introduce AnnualBERT, a series of language models designed specifically to capture the temporal evolution of scientific text. Deviating from the prevailing paradigms of subword tokenization and "one model to rule them all", AnnualBERT adopts whole words as tokens and is composed of a base RoBERTa model pretrained from scratch on the full text of 1.7 million arXiv papers published until 2008 and a collection of models progressively trained on arXiv papers on an annual basis. We demonstrate the effectiveness of AnnualBERT models by showing that they not only have comparable performance on standard tasks but also achieve state-of-the-art performance on domain-specific NLP tasks as well as link prediction tasks in the arXiv citation network. We then utilize probing tasks to quantify the models' behavior in terms of representation learning and forgetting as time progresses. Our approach enables the pretrained models not only to improve performance on scientific text processing tasks but also to provide insights into the development of scientific discourse over time. The series of models is available at https://huggingface.co/jd445/AnnualBERTs.
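As a rough illustration of the "progressively trained ... on an annual basis" setup, the sketch below continues masked-language-model pretraining year by year, each year starting from the previous year's weights. It assumes Hugging Face transformers and datasets; the base checkpoint id, the per-year corpus files (arxiv_2009.txt, ...), and all hyperparameters are placeholders rather than the authors' actual configuration.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

checkpoint = "jd445/AnnualBERT-base"  # hypothetical id for the 2008 base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

for year in range(2009, 2017):
    # Each year's model starts from the previous year's checkpoint.
    model = AutoModelForMaskedLM.from_pretrained(checkpoint)

    ds = load_dataset("text", data_files=f"arxiv_{year}.txt")["train"]
    ds = ds.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
                batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=f"annualbert-{year}",
                               per_device_train_batch_size=16,
                               num_train_epochs=1),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
    )
    trainer.train()
    trainer.save_model(f"annualbert-{year}")
    checkpoint = f"annualbert-{year}"  # next year continues from here
```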
Problem

Research questions and friction points this paper is trying to address.

Capturing temporal evolution of scientific text
Improving domain-specific NLP task performance
Analyzing scientific discourse development over time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses whole words as tokens instead of subwords (see the tokenizer sketch below)
Base RoBERTa model pretrained from scratch on 1.7 million arXiv papers
Progressively trained on an annual basis to capture temporal evolution
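To make the whole-word tokenization idea concrete, here is a minimal sketch using the tokenizers library. The corpus file, vocabulary size, and special tokens are illustrative assumptions, not the paper's configuration.

```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordLevelTrainer

# Each whitespace-delimited word is one vocabulary entry; nothing is split
# into subwords, so out-of-vocabulary words map to [UNK].
tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

trainer = WordLevelTrainer(
    vocab_size=50_000,  # illustrative size
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(files=["arxiv_corpus.txt"], trainer=trainer)  # hypothetical corpus file

# Unlike BPE, a scientific term stays whole (or becomes [UNK]), never fragments.
print(tokenizer.encode("perovskite solar cells").tokens)
```

Keeping scientific terms intact as single vocabulary entries is what lets the annual models track a concept's representation across years.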
Junjie Dong
Department of Data Science, College of Computing, City University of Hong Kong, Hong Kong, China

Zhuoqi Lyu
Department of Data Science, College of Computing, City University of Hong Kong, Hong Kong, China

Qing Ke
City University of Hong Kong
Data Science · Innovation · Complex Systems · Cheminformatics