🤖 AI Summary
Existing protein pretraining models underutilize protein–text matching data and lack a function-oriented pretraining paradigm, resulting in substantially weaker functional understanding compared to multimodal vision models.
Method: The paper curates ProtAnno, a large-scale protein–text paired dataset built with a property-driven sampling strategy, and proposes a function-informed multimodal pretraining paradigm. It features (i) two segment-wise pretraining objectives that explicitly model static and dynamic protein functional segments; (ii) a noise-robust sampling strategy whose selection probability is driven by sample confidence and property coverage; and (iii) fine-grained cross-modal alignment between protein sequences and biomedical text. The method integrates CLIP-style contrastive learning, functional segment modeling, confidence-weighted sampling, and multi-task joint optimization.
Contribution/Results: The resulting model, ProtCLIP, achieves state-of-the-art performance across 22 benchmarks spanning 5 task types, with an average 75% improvement on five cross-modal transformation benchmarks and gains of 59.9% on GO cellular component (GO-CC) and 39.7% on GO biological process (GO-BP) function prediction.
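The CLIP-style alignment mentioned above can be illustrated with a minimal symmetric contrastive (InfoNCE) loss over a batch of paired protein and text embeddings. This is a generic sketch of the technique, not ProtCLIP's actual implementation; all names and the temperature value are illustrative:

```python
import numpy as np

def clip_style_loss(protein_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Matched protein/text pairs sit on the diagonal of the similarity
    matrix; all other entries in the same row or column act as
    in-batch negatives.
    """
    # L2-normalize so dot products become cosine similarities
    p = protein_emb / np.linalg.norm(protein_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = p @ t.T / temperature   # (B, B) similarity matrix
    labels = np.arange(len(p))       # protein i matches text i

    def cross_entropy(lg, lb):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lb)), lb].mean()

    # Average the protein->text and text->protein directions
    return 0.5 * (cross_entropy(logits, labels)
                  + cross_entropy(logits.T, labels))
```

With perfectly aligned embeddings the diagonal dominates and the loss approaches zero; shuffling one modality's rows drives it up, which is what the training signal exploits.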
📝 Abstract
The multi-modality pre-training paradigm that aligns protein sequences with biological descriptions has learned general protein representations and achieved promising performance in various downstream applications. However, these works have been unable to replicate the extraordinary success of language-supervised visual foundation models, due to the ineffective usage of aligned protein-text paired data and the lack of an effective function-informed pre-training paradigm. To address these issues, this paper curates a large-scale protein-text paired dataset called ProtAnno with a property-driven sampling strategy, and introduces a novel function-informed protein pre-training paradigm. Specifically, the sampling strategy determines the selection probability of each sample based on its confidence and property coverage, balancing data quality against data quantity in the face of large-scale noisy data. Furthermore, motivated by the significance of protein-specific functional mechanisms, the proposed paradigm explicitly models static and dynamic protein functional segments through two segment-wise pre-training objectives, injecting fine-grained information in a function-informed manner. Leveraging all these innovations, we develop ProtCLIP, a multi-modality foundation model that comprehensively represents function-aware protein embeddings. On 22 protein benchmarks spanning 5 types, including protein functionality classification, mutation effect prediction, cross-modal transformation, semantic similarity inference, and protein-protein interaction prediction, ProtCLIP consistently achieves SOTA performance, with remarkable improvements of 75% on average across five cross-modal transformation benchmarks, 59.9% in GO-CC, and 39.7% in GO-BP protein function prediction. These results verify the extraordinary potential of ProtCLIP as a protein multi-modality foundation model.
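As a rough illustration of the property-driven sampling idea described in the abstract, the sketch below weights each record by a mix of its annotation confidence and its property coverage, then draws batches proportionally to that weight. The abstract does not give the actual formula, so the linear mixing and the `alpha` parameter are assumptions:

```python
import random

def sampling_weight(confidence, n_annotated, n_properties, alpha=0.5):
    """Hypothetical weight combining annotation confidence and coverage.

    `coverage` is the fraction of tracked properties this record
    annotates; `alpha` (an assumed hyperparameter) trades it off
    against the confidence score.
    """
    coverage = n_annotated / n_properties
    return alpha * confidence + (1 - alpha) * coverage

def sample_batch(records, batch_size, rng=random):
    """Draw a batch with probability proportional to each record's weight."""
    weights = [
        sampling_weight(r["confidence"], r["n_annotated"], r["n_properties"])
        for r in records
    ]
    return rng.choices(records, weights=weights, k=batch_size)
```

The effect is that high-confidence, richly annotated records are seen more often during pre-training, while low-quality records are down-weighted rather than discarded outright, which is one way to balance data quality against data quantity on large noisy corpora.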