CoPeP: Benchmarking Continual Pretraining for Protein Language Models

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of continually pretraining protein language models on the ever-growing stream of protein sequence data to improve their performance on structure and function prediction tasks. The authors introduce CoPeP, a benchmark built on a temporally ordered protein dataset constructed from ten years of UniProt releases, and propose the first approach to incorporate temporal metadata into continual pretraining for protein language models. They systematically evaluate strategies such as experience replay, unlearning, and plasticity-based methods across 31 protein understanding tasks. Results show that continual pretraining augmented with temporal metadata improves perplexity by up to 7% compared to joint training, and that several continual learning strategies outperform naive continual pretraining.

📝 Abstract
Protein language models (pLMs) have recently gained significant attention for their ability to uncover relationships between sequence, structure, and function from evolutionary statistics, thereby accelerating therapeutic drug discovery. These models learn from large protein databases that are continuously updated by the biology community and whose dynamic nature motivates the application of continual learning, not only to keep up with the ever-growing data, but also as an opportunity to take advantage of the temporal meta-information that is created during this process. As a result, we introduce the Continual Pretraining of Protein Language Models (CoPeP) benchmark, a novel benchmark for evaluating continual learning approaches on pLMs. Specifically, we curate a sequence of protein datasets derived from the UniProt Knowledgebase spanning a decade and define metrics to assess pLM performance across 31 protein understanding tasks. We evaluate several methods from the continual learning literature, including replay, unlearning, and plasticity-based methods, some of which have never been applied to models and data of this scale. Our findings reveal that incorporating temporal meta-information improves perplexity by up to 7% even when compared to training on data from all tasks jointly. Moreover, even at scale, several continual learning methods outperform naive continual pretraining. The CoPeP benchmark offers an exciting opportunity to study these methods at scale in an impactful real-world application.
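The abstract mentions replay as one of the evaluated continual learning methods. As a rough illustration only (not the paper's implementation), the sketch below mixes a reservoir-sampling replay buffer into a release-ordered stream of protein sequences, so each training batch combines new sequences from the current release with sequences sampled from earlier ones. All names (`ReplayBuffer`, `continual_pretraining_batches`), the replay ratio, and the reservoir-sampling choice are hypothetical assumptions for this sketch.

```python
import random


class ReplayBuffer:
    """Fixed-capacity buffer of past sequences for experience replay.

    Reservoir sampling keeps an (approximately) uniform sample over
    everything seen so far, a common choice for replay buffers in
    continual training settings.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total number of sequences offered to the buffer
        self.rng = random.Random(seed)

    def add(self, sequence):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sequence)
        else:
            # Replace a random slot with probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sequence

    def sample(self, k):
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))


def continual_pretraining_batches(release_stream, buffer,
                                  replay_ratio=0.25, batch_size=8):
    """Yield (release_id, batch) pairs mixing new and replayed sequences.

    `release_stream` is an iterable of (release_id, sequences) pairs in
    temporal order, e.g. successive UniProt releases. Each batch holds
    `batch_size * (1 - replay_ratio)` new sequences plus replayed ones.
    """
    n_replay = int(batch_size * replay_ratio)
    n_new = batch_size - n_replay
    for release_id, sequences in release_stream:
        for i in range(0, len(sequences), n_new):
            new_part = sequences[i:i + n_new]
            replay_part = buffer.sample(n_replay)
            yield release_id, new_part + replay_part
        # Only after training on a release do its sequences become
        # candidates for replay in later releases.
        for seq in sequences:
            buffer.add(seq)
```

For example, feeding two toy "releases" through `continual_pretraining_batches` yields batches for the second release that contain one sequence replayed from the first; a real pipeline would tokenize these batches and feed them to the pLM's masked-language-modeling objective.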
Problem

Research questions and friction points this paper is trying to address.

protein language models
continual pretraining
temporal meta-information
protein understanding tasks
UniProt Knowledgebase
Innovation

Methods, ideas, or system contributions that make the work stand out.

continual pretraining
protein language models
temporal meta-information
CoPeP benchmark
large-scale continual learning