ProtCLIP: Function-Informed Protein Multi-Modal Learning

📅 2024-12-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing protein pretraining models underuse aligned protein–text data and lack a function-informed pretraining paradigm, leaving their functional understanding far behind that of language-supervised vision foundation models. Method: the paper curates ProtAnno, a large-scale protein–text paired dataset built with a property-driven sampling strategy, and proposes a function-informed multimodal pretraining framework featuring (i) two segment-wise pretraining objectives that explicitly model static and dynamic functional segments; (ii) a noise-robust sampling strategy driven by sample confidence and property coverage; and (iii) fine-grained cross-modal alignment between protein sequences and biomedical text. The approach combines CLIP-style contrastive learning, functional-segment modeling, confidence-weighted sampling, and multi-task joint optimization. Contribution/Results: the resulting model, ProtCLIP, achieves state-of-the-art performance across 22 benchmark tasks, with a 75% average improvement on five cross-modal transformation benchmarks and gains of 59.9% and 39.7% on GO cellular component (GO-CC) and GO biological process (GO-BP) function prediction, respectively.
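The CLIP-style alignment mentioned above contrasts matched protein–text pairs against in-batch negatives. Below is a minimal sketch of the symmetric InfoNCE objective such models typically use; the function name, temperature value, and embedding shapes are illustrative assumptions, not ProtCLIP's actual implementation.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(protein_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired protein/text embeddings.

    protein_emb, text_emb: [batch, dim] tensors from the two encoders.
    Matching pairs share the same row index; all other rows act as negatives.
    """
    # L2-normalize so that dot products are cosine similarities.
    p = F.normalize(protein_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)

    # [batch, batch] similarity matrix, scaled by temperature.
    logits = p @ t.T / temperature
    labels = torch.arange(p.size(0), device=p.device)

    # Contrast in both directions: protein -> text and text -> protein.
    loss_p2t = F.cross_entropy(logits, labels)
    loss_t2p = F.cross_entropy(logits.T, labels)
    return (loss_p2t + loss_t2p) / 2
```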

📝 Abstract
The multi-modality pre-training paradigm that aligns protein sequences with biological descriptions has learned general protein representations and achieved promising performance in various downstream applications. However, these works have been unable to replicate the extraordinary success of language-supervised visual foundation models, owing to the ineffective use of aligned protein–text paired data and the lack of an effective function-informed pre-training paradigm. To address these issues, this paper curates a large-scale protein–text paired dataset called ProtAnno with a property-driven sampling strategy, and introduces a novel function-informed protein pre-training paradigm. Specifically, the sampling strategy determines each sample's selection probability from its confidence and property coverage, balancing data quality against data quantity in the face of large-scale noisy data. Furthermore, motivated by the significance of protein-specific functional mechanisms, the proposed paradigm explicitly models static and dynamic protein functional segments through two segment-wise pre-training objectives, injecting fine-grained information in a function-informed manner. Leveraging these innovations, we develop ProtCLIP, a multi-modality foundation model that comprehensively represents function-aware protein embeddings. On 22 protein benchmarks spanning 5 task types, including protein functionality classification, mutation effect prediction, cross-modal transformation, semantic similarity inference, and protein–protein interaction prediction, ProtCLIP consistently achieves SOTA performance, with remarkable improvements of 75% on average across five cross-modal transformation benchmarks, 59.9% on GO-CC, and 39.7% on GO-BP protein function prediction. These results verify the extraordinary potential of ProtCLIP to serve as the protein multi-modality foundation model.
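
The abstract says each sample's selection probability depends on its confidence and its property coverage, but does not give the exact weighting. The sketch below is one plausible instantiation under that description: a linear blend (controlled by a hypothetical `alpha`) of annotation confidence and inverse property frequency, normalized into a sampling distribution. All names and the blending rule are assumptions, not the paper's formula.

```python
import numpy as np

def selection_probabilities(confidences, property_ids, alpha=0.5):
    """Score each sample by annotation confidence and property rarity.

    confidences : per-sample annotation confidence in [0, 1].
    property_ids: integer property label for each sample.
    alpha       : trade-off between quality (confidence) and coverage.
    """
    confidences = np.asarray(confidences, dtype=float)
    property_ids = np.asarray(property_ids)

    # Coverage term: samples whose property is rare in the pool get a
    # larger weight, encouraging broad property coverage.
    _, inverse, counts = np.unique(property_ids, return_inverse=True,
                                   return_counts=True)
    rarity = 1.0 / counts[inverse]

    # Blend quality and coverage, then normalize into a distribution.
    scores = alpha * confidences + (1.0 - alpha) * rarity / rarity.max()
    return scores / scores.sum()

# Example: draw a training subset of 2 samples without replacement.
probs = selection_probabilities([0.9, 0.4, 0.8], [0, 0, 1])
subset = np.random.choice(3, size=2, replace=False, p=probs)
```

Blending quality with rarity in this way keeps high-confidence samples likely while still surfacing under-covered properties, matching the abstract's stated goal of balancing data quality against data quantity on large-scale noisy data.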
Problem

Research questions and friction points this paper is trying to address.

Protein Pre-training
Text-Protein Alignment
Functional Guidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

ProtCLIP
multi-modal learning
protein function prediction
Hanjing Zhou
College of Computer Science and Technology, Zhejiang University
Mingze Yin
Zhejiang University
Deep Learning · AI for Science · Computer Vision
Wei Wu
School of Artificial Intelligence and Data Science, University of Science and Technology of China
Mingyang Li
Alibaba Cloud Computing
Kun Fu
Alibaba Cloud Computing
Jintai Chen
Assistant Professor @ HKUST(GZ)
AI for Healthcare · Multimodal Learning · Deep Tabular Learning
Jian Wu
State Key Laboratory of Transvascular Implantation Devices of The Second Affiliated Hospital, School of Medicine, Zhejiang University; Zhejiang Key Laboratory of Medical Imaging Artificial Intelligence
Zheng Wang
Alibaba Cloud Computing