Long-context Protein Language Modeling Using Bidirectional Mamba with Shared Projection Layers

📅 2024-10-29
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing protein language models (e.g., ESM-2) struggle to model long sequences and to integrate protein–protein interaction (PPI) context. Method: The authors propose LC-PLM, built on BiMamba-S, a bidirectional Mamba architecture based on selective structured state-space models (SSMs) with shared projection layers, trained with masked language modeling (MLM) to learn amino-acid-level representations over much longer contexts than Transformer-based models. They also introduce a graph-contextual variant, LC-PLM-G, which incorporates PPI-graph structure in a second pretraining stage. Contribution/Results: LC-PLM achieves 7–34% improvements over ESM-2 across downstream tasks, with favorable neural scaling behavior and markedly better generalization to long sequences; LC-PLM-G shows promising results on protein structure and function prediction.

📝 Abstract
Self-supervised training of language models (LMs) has seen great success for protein sequences in learning meaningful representations and for generative drug design. Most protein LMs are based on the Transformer architecture, trained on individual proteins with short context lengths. Such protein LMs cannot extrapolate well to longer proteins and protein complexes. They also fail to account for the underlying biological mechanisms carried out by biomolecular interactions and dynamics, i.e., proteins often interact with other proteins, molecules, and pathways in complex biological systems. In this work, we propose LC-PLM, based on an alternative protein LM architecture, BiMamba-S, built upon selective structured state-space models, to learn high-quality universal protein representations at the amino acid token level using masked language modeling. We also introduce its graph-contextual variant, LC-PLM-G, which contextualizes protein-protein interaction (PPI) graphs for a second stage of training. LC-PLM demonstrates favorable neural scaling laws, better length extrapolation capability, and a 7% to 34% improvement on protein downstream tasks over Transformer-based ESM-2. LC-PLM-G, further trained within the context of PPI graphs, shows promising results on protein structure and function prediction tasks. Our study demonstrates the benefit of increasing the context size with computationally efficient LM architectures (e.g., structured state-space models) in learning universal protein representations and incorporating the molecular interaction context contained in biological graphs.
Problem

Research questions and friction points this paper is trying to address.

Improving protein language models for longer sequences and complexes
Incorporating protein-protein interactions into representation learning
Enhancing computational efficiency with structured state-space models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional Mamba with shared projection layers
Graph-contextual protein-protein interaction training
Structured state-space models for efficiency
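The "shared projection layers" idea can be illustrated with a toy sketch: one set of input/output projection weights is reused by both the forward and the reversed scan, so the bidirectional block adds sequence mixing in both directions without doubling the projection parameters. This is a hedged NumPy illustration, not the paper's implementation: `ssm_scan` is a plain linear recurrence standing in for the selective SSM, and all names (`BiSSMBlock`, `w_in`, `w_out`, the decay `a`) are hypothetical.

```python
import numpy as np

def ssm_scan(x, a=0.9):
    # Minimal linear state-space recurrence h_t = a * h_{t-1} + x_t,
    # a stand-in for the selective (Mamba-style) scan in the real model.
    h = np.zeros_like(x)
    acc = np.zeros(x.shape[-1])
    for t in range(x.shape[0]):
        acc = a * acc + x[t]
        h[t] = acc
    return h

class BiSSMBlock:
    """Toy bidirectional SSM block with shared (tied) projection layers."""

    def __init__(self, d_model, d_inner, seed=0):
        rng = np.random.default_rng(seed)
        # One set of projections reused by both directions: the weight tying
        # that "shared projection layers" refers to.
        self.w_in = rng.standard_normal((d_model, d_inner)) / np.sqrt(d_model)
        self.w_out = rng.standard_normal((d_inner, d_model)) / np.sqrt(d_inner)

    def __call__(self, x):
        # x: (seq_len, d_model) sequence of token embeddings
        h = x @ self.w_in                # shared input projection
        fwd = ssm_scan(h)                # forward causal scan
        bwd = ssm_scan(h[::-1])[::-1]    # same scan on the reversed sequence
        return (fwd + bwd) @ self.w_out  # shared output projection
```

Because both directions use identical weights, the block is equivariant to sequence reversal: feeding in a flipped sequence yields the flipped output, which is one way to sanity-check a tied bidirectional implementation.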