Structure-Aligned Protein Language Model

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing protein language models (pLMs) lack explicit modeling of three-dimensional structure, which limits their performance on structure-related tasks. To address this, the authors propose a dual-task structural alignment framework: (1) a latent-level contrastive learning task that aligns residue representations from the pLM with those from a pre-trained protein graph neural network (pGNN) across multiple proteins, injecting inter-protein structural knowledge, and (2) a physical-level structural token prediction task that injects intra-protein structural knowledge. A residue loss selection module, guided by a small model trained on high-quality structures, improves robustness to noisy structural inputs. Applied to the ESM2 and AMPLIFY baselines, the method yields broad gains, including a 12.7% improvement in ESM2 contact prediction. The resulting models, SaESM2 and SaAMPLIFY, are released along with the data and the training and inference code.

📝 Abstract
Protein language models (pLMs) pre-trained on vast protein sequence databases excel at various downstream tasks but lack the structural knowledge essential for many biological applications. To address this, we integrate structural insights from pre-trained protein graph neural networks (pGNNs) into pLMs through a latent-level contrastive learning task. This task aligns residue representations from pLMs with those from pGNNs across multiple proteins, enriching pLMs with inter-protein structural knowledge. Additionally, we incorporate a physical-level task that infuses intra-protein structural knowledge by optimizing pLMs to predict structural tokens. The proposed dual-task framework effectively incorporates both inter-protein and intra-protein structural knowledge into pLMs. Given the variability in the quality of protein structures in PDB, we further introduce a residue loss selection module, which uses a small model trained on high-quality structures to select reliable yet challenging residue losses for the pLM to learn. Applying our structure alignment method to the state-of-the-art ESM2 and AMPLIFY results in notable performance gains across a wide range of tasks, including a 12.7% increase in ESM2 contact prediction. The data, code, and resulting SaESM2 and SaAMPLIFY models will be released on Hugging Face.
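The latent-level contrastive task described in the abstract can be sketched as a symmetric InfoNCE-style objective: each residue's pLM embedding should retrieve the matching pGNN embedding of the same residue, with other residues in the batch serving as negatives. This is an illustrative reconstruction, not the paper's exact loss; the function name, temperature value, and batching are assumptions.

```python
import torch
import torch.nn.functional as F

def residue_contrastive_loss(plm_emb: torch.Tensor,
                             pgnn_emb: torch.Tensor,
                             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss aligning residue representations.

    plm_emb, pgnn_emb: (N, d) tensors where row i of each tensor is the
    same residue, drawn from a batch that may span multiple proteins
    (which is what makes the alignment *inter*-protein).
    """
    z1 = F.normalize(plm_emb, dim=-1)
    z2 = F.normalize(pgnn_emb, dim=-1)
    # (N, N) cosine-similarity matrix; the diagonal holds positive pairs.
    logits = z1 @ z2.T / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetric: pLM view retrieves pGNN view, and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))
```

Sampling residues across proteins in the same batch is what distinguishes this from a purely per-protein alignment: the negatives come from other proteins, so the pLM must encode structural context that is discriminative beyond a single chain.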
Problem

Research questions and friction points this paper is trying to address.

Integrate structural knowledge into protein language models (pLMs)
Align residue representations between pLMs and protein graph neural networks (pGNNs)
Improve pLM performance in biological tasks via dual-task framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates pGNN structural insights via contrastive learning
Predicts structural tokens for intra-protein knowledge
Uses residue loss selection for reliable learning
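The residue loss selection idea above can be sketched as a filtering step: a small reference model trained on high-quality structures scores each residue, and the pLM only learns from residues whose reference loss is neither trivially low (uninformative) nor extremely high (likely reflecting a noisy structure). The quantile thresholds and function name here are illustrative assumptions, not the paper's stated criterion.

```python
import torch

def select_residue_losses(plm_losses: torch.Tensor,
                          ref_losses: torch.Tensor,
                          low_q: float = 0.3,
                          high_q: float = 0.9) -> torch.Tensor:
    """Average pLM per-residue losses over 'reliable yet challenging' residues.

    plm_losses: (N,) per-residue losses from the pLM being trained.
    ref_losses: (N,) per-residue losses from a small model trained on
    high-quality structures, used only to build the selection mask.
    """
    lo = torch.quantile(ref_losses, low_q)
    hi = torch.quantile(ref_losses, high_q)
    # Keep residues the reference model finds non-trivial but not anomalous.
    mask = ((ref_losses >= lo) & (ref_losses <= hi)).float()
    return (plm_losses * mask).sum() / mask.sum().clamp(min=1.0)
```

The reference model's losses are detached from the pLM's computation graph by construction here, so the filter shapes which residues contribute gradient without itself being trained by this objective.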
Can Chen
Mila – Quebec AI Institute, Université de Montréal
David Heurtel-Depeiges
Mila – Quebec AI Institute, Chandar Research Lab, Polytechnique Montréal
Robert M. Vernon
Amgen
Christopher James Langmead
Amgen
Yoshua Bengio
Professor of computer science, University of Montreal, Mila, IVADO, CIFAR
Machine learning, deep learning, artificial intelligence
Quentin Fournier
Research Fellow at Mila - Quebec AI Institute
Deep Learning, Natural Language Processing, Drug Discovery