Bidirectional Hierarchical Protein Multi-Modal Representation Learning

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the inefficiency in multimodal fusion arising from the disconnection between protein sequence and 3D structural representations, this paper proposes the Bidirectional Hierarchical Multimodal Fusion framework (Bi-HMF), the first method enabling dynamic, depth-wise collaboration between protein language models (pLMs) and graph neural networks (GNNs). Bi-HMF innovatively integrates local gating mechanisms with global multi-head self-attention to establish cross-layer, bidirectional feature interaction pathways, facilitating complementary enhancement between sequence and structure embeddings. Evaluated on five benchmark tasks—including enzyme classification, model quality assessment, and protein–ligand binding affinity prediction—Bi-HMF consistently outperforms existing state-of-the-art approaches, achieving significant improvements in multimodal protein representation learning.

📝 Abstract
Protein representation learning is critical for numerous biological tasks. Recently, large transformer-based protein language models (pLMs) pretrained on large-scale protein sequence corpora have demonstrated significant success in sequence-based tasks. However, pLMs lack structural information. Conversely, graph neural networks (GNNs) designed to leverage 3D structural information have shown promising generalization in protein-related prediction tasks, but their effectiveness is often constrained by the scarcity of labeled structural data. Recognizing that sequence and structural representations are complementary perspectives of the same protein entity, we propose a multimodal bidirectional hierarchical fusion framework to effectively merge these modalities. Our framework employs attention and gating mechanisms to enable effective interaction between pLM-generated sequential representations and GNN-extracted structural features, improving information exchange and enhancement across layers of the neural network. Building on the framework, we further introduce a local Bi-Hierarchical Fusion approach with gating and a global Bi-Hierarchical Fusion approach with multi-head self-attention. Through extensive experiments on a diverse set of protein-related tasks, our method demonstrates consistent improvements over strong baselines and existing fusion techniques across a variety of protein representation learning benchmarks, including React (enzyme/EC classification), model quality assessment (MQA), protein–ligand binding affinity prediction (LBA), protein–protein binding site prediction (PPBS), and B-cell epitope prediction (BCEs). Our method establishes a new state of the art for multimodal protein representation learning, underscoring the efficacy of Bi-Hierarchical Fusion in bridging sequence and structural modalities.
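To make the local gated fusion idea concrete, here is a minimal NumPy sketch of one bidirectional fusion step between per-residue pLM and GNN features. This is an illustration of the mechanism described in the abstract, not the authors' code: the weight names, the concatenation-based sigmoid gate, and the residual update are all assumptions.

```python
# Illustrative sketch (assumed details, not the paper's implementation):
# each modality receives a projected message from the other, scaled by a
# sigmoid gate computed from the concatenated features, so information
# flows in both directions at every layer.
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared embedding dimension (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical projection and gate weights for each direction.
W_s2g = rng.standard_normal((d, d)) / np.sqrt(d)      # sequence -> structure
W_g2s = rng.standard_normal((d, d)) / np.sqrt(d)      # structure -> sequence
G_s = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)  # gate for sequence tower
G_g = rng.standard_normal((2 * d, d)) / np.sqrt(2 * d)  # gate for structure tower

def gated_bidirectional_fusion(h_seq, h_str):
    """Fuse per-residue sequence (h_seq) and structure (h_str) features.

    Returns updated (h_seq, h_str); each tower keeps its own residual
    stream and admits the other modality's message through a gate.
    """
    msg_to_seq = h_str @ W_g2s
    msg_to_str = h_seq @ W_s2g
    gate_seq = sigmoid(np.concatenate([h_seq, msg_to_seq], axis=-1) @ G_s)
    gate_str = sigmoid(np.concatenate([h_str, msg_to_str], axis=-1) @ G_g)
    return h_seq + gate_seq * msg_to_seq, h_str + gate_str * msg_to_str

# Toy usage: 5 residues with d-dimensional features from each tower.
h_seq = rng.standard_normal((5, d))
h_str = rng.standard_normal((5, d))
h_seq_fused, h_str_fused = gated_bidirectional_fusion(h_seq, h_str)
```

In a depth-wise (hierarchical) setup, a step like this would sit between corresponding pLM and GNN layers so that each layer's towers exchange information before the next layer runs.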
Problem

Research questions and friction points this paper is trying to address.

Combines protein sequence and structural data for better representation
Integrates transformer-based models with graph neural networks
Improves performance across diverse protein-related prediction tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional hierarchical fusion framework
Attention and gating mechanisms integration
Local and global hierarchical fusion approaches
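The global variant can likewise be sketched: the abstract describes global Bi-Hierarchical Fusion via multi-head self-attention, which plausibly amounts to stacking the sequence and structure token sets and letting self-attention mix them across modalities. The following NumPy sketch assumes that reading; head count, dimensions, and weight layout are illustrative choices, not the paper's.

```python
# Illustrative sketch (assumed details): joint multi-head self-attention
# over the concatenated sequence and structure tokens, so every token of
# one modality can attend to every token of the other.
import numpy as np

rng = np.random.default_rng(1)
d, n_heads = 8, 2
dh = d // n_heads  # per-head dimension

Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multihead_self_attention(x):
    n = x.shape[0]
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Split into heads: (n_heads, n, dh).
    q = q.reshape(n, n_heads, dh).transpose(1, 0, 2)
    k = k.reshape(n, n_heads, dh).transpose(1, 0, 2)
    v = v.reshape(n, n_heads, dh).transpose(1, 0, 2)
    att = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh))
    out = (att @ v).transpose(1, 0, 2).reshape(n, d)
    return out @ Wo

def global_fusion(h_seq, h_str):
    """Jointly attend over both modalities, then split the tokens back."""
    joint = np.concatenate([h_seq, h_str], axis=0)
    fused = joint + multihead_self_attention(joint)  # residual update
    n = h_seq.shape[0]
    return fused[:n], fused[n:]

# Toy usage: one protein's 5 residues, featurized by both towers.
h_seq = rng.standard_normal((5, d))
h_str = rng.standard_normal((5, d))
fused_seq, fused_str = global_fusion(h_seq, h_str)
```

Relative to the local gated step, this trades per-residue selectivity for a global receptive field: attention weights let any residue's structure feature inform any residue's sequence feature, at quadratic cost in the number of tokens.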
Xuefeng Liu
Department of Computer Science, University of Chicago, Chicago, IL, USA
Songhao Jiang
Department of Computer Science, University of Chicago, Chicago, IL, USA
Chih-chan Tien
Department of Computer Science, University of Chicago, Chicago, IL, USA
Jinbo Xu
Professor, Toyota Technological Institute at Chicago
Machine Learning · Algorithm and Optimization · Computational Biology
Rick Stevens
Professor of Computer Science, University of Chicago
HPC · Bioinformatics · Distributed Computing · Visualization · Collaboration