Discriminative protein sequence modelling with Latent Space Diffusion

📅 2025-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: Existing protein sequence representation learning methods offer limited discriminative capacity. Method: This paper introduces a latent-space diffusion framework designed for discriminative representation learning. It combines a protein sequence autoencoder with denoising diffusion in the autoencoder's latent space, proposes two autoencoder variants (homogeneous and inhomogeneous), and yields a one-parameter family of noise-level-indexed representations that allows explicit control over representation discriminability. Results: Across multiple protein property prediction tasks, diffusion models trained on the proposed autoencoder latents show higher discriminative power than a diffusion model trained on a masked language modeling latent space, although none of the diffusion representations match the raw masked language model embeddings themselves. The work demonstrates an integration of diffusion modeling into discriminative protein representation learning, pointing toward controllable sequence representations.
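
The summary describes the architecture only at a high level. The minimal PyTorch sketch below illustrates the general latent-space diffusion recipe it refers to: an autoencoder maps a protein sequence to a latent, and a denoiser is trained on noised versions of that latent with a standard DDPM-style epsilon-prediction objective. All class names, layer sizes, and the noise schedule are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SequenceAutoencoder(nn.Module):
    """Toy per-residue autoencoder: tokens -> latent -> reconstruction logits."""
    def __init__(self, vocab_size=25, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.decoder = nn.Linear(d_model, vocab_size)

    def encode(self, tokens):                # (B, L) -> (B, L, d_model)
        return self.encoder(self.embed(tokens))

    def forward(self, tokens):
        z = self.encode(tokens)
        return self.decoder(z), z            # reconstruction logits and latent

class LatentDenoiser(nn.Module):
    """Predicts the noise added to the latent at diffusion step t."""
    def __init__(self, d_model=128, n_steps=1000):
        super().__init__()
        self.time_embed = nn.Embedding(n_steps, d_model)
        self.net = nn.Sequential(
            nn.Linear(d_model, d_model), nn.GELU(), nn.Linear(d_model, d_model)
        )

    def forward(self, z_t, t):
        return self.net(z_t + self.time_embed(t)[:, None, :])

# Standard DDPM noise schedule (cumulative products of 1 - beta).
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_bar = torch.cumprod(1 - betas, dim=0)

def diffusion_loss(denoiser, z, alphas_bar):
    """Epsilon-prediction loss on the latent z at a random step t."""
    t = torch.randint(0, alphas_bar.size(0), (z.size(0),), device=z.device)
    eps = torch.randn_like(z)
    a = alphas_bar[t].view(-1, 1, 1)
    z_t = a.sqrt() * z + (1 - a).sqrt() * eps    # forward noising of the latent
    return nn.functional.mse_loss(denoiser(z_t, t), eps)
```

In this decomposition the autoencoder handles manifold learning (reconstruction) while the denoiser handles distributional modelling of the latent, which is the split the paper's framework is built around.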

📝 Abstract
We explore a framework for protein sequence representation learning that decomposes the task between manifold learning and distributional modelling. Specifically, we present a Latent Space Diffusion architecture which combines a protein sequence autoencoder with a denoising diffusion model operating on its latent space. We obtain a one-parameter family of learned representations from the diffusion model, along with the autoencoder's latent representation. We propose and evaluate two autoencoder architectures: a homogeneous model forcing amino acids of the same type to be identically distributed in the latent space, and an inhomogeneous model employing a noise-based variant of masking. As a baseline we take a latent space learned by masked language modelling, and evaluate discriminative capability on a range of protein property prediction tasks. Our finding is twofold: the diffusion models trained on both our proposed variants display higher discriminative power than the one trained on the masked language model baseline, yet none of the diffusion representations achieve the performance of the masked language model embeddings themselves.
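
The "one-parameter family of learned representations" can be read as representations indexed by the diffusion noise level. The sketch below is a hypothetical readout built on the toy model above; how the paper actually extracts representations (e.g. which activations are pooled) may differ.

```python
import torch

@torch.no_grad()
def representation_at(denoiser, z, t_value, alphas_bar):
    """One member of the family: representation at noise level t_value."""
    t = torch.full((z.size(0),), t_value, dtype=torch.long, device=z.device)
    a = alphas_bar[t].view(-1, 1, 1)
    z_t = a.sqrt() * z + (1 - a).sqrt() * torch.randn_like(z)
    # For simplicity we pool the denoiser's output over sequence positions;
    # the paper may instead read out intermediate activations.
    return denoiser(z_t, t).mean(dim=1)      # (B, d_model) per-sequence vector
```

Sweeping t from small to large then traces out the family, trading fidelity to the input latent against the amount of denoising structure injected by the diffusion model.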
Problem

Research questions and friction points this paper is trying to address.

Learning protein sequence representations via manifold and distributional modeling
Comparing autoencoder architectures for latent space protein modeling
Evaluating discriminative power of diffusion models on protein properties (a linear-probe sketch follows this list)
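
As referenced in the last item, a common way to measure the discriminative power of frozen representations is a linear probe: fit a lightweight classifier on the embeddings for each protein property prediction task and compare held-out accuracy across representations. This is an assumed protocol for illustration; the paper's tasks and probe may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """Fit a linear probe on frozen embeddings and report held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, labels, test_size=0.2, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)
```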
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Space Diffusion for protein sequences
Autoencoder with denoising diffusion model
Homogeneous and inhomogeneous autoencoder architectures (a noise-masking sketch follows this list)
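
The inhomogeneous model's "noise-based variant of masking" is described only briefly. The sketch below shows one plausible interpretation, stated as an assumption (with illustrative mask_prob and sigma): selected residue positions receive additive Gaussian noise on their embeddings instead of a discrete [MASK] token.

```python
import torch

def noise_mask(embeddings, mask_prob=0.15, sigma=1.0):
    """Corrupt a random subset of residue embeddings with Gaussian noise.

    Hypothetical reading of the "noise-based variant of masking": instead of
    replacing positions with a [MASK] token, they receive additive noise.
    """
    mask = torch.rand(embeddings.shape[:2], device=embeddings.device) < mask_prob
    noise = sigma * torch.randn_like(embeddings)
    return torch.where(mask[..., None], embeddings + noise, embeddings), mask
```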
Eoin Quinn
LPTMS, Université Paris-Sud
Strongly correlated electrons · many-body techniques · quantum integrability
Ghassene Jebali
InstaDeep Ltd, 42 rue de Paradis, 75010 Paris, France
Maxime Seince
InstaDeep Ltd, 42 rue de Paradis, 75010 Paris, France
Oliver Bent
InstaDeep
Artificial Intelligence · Machine Learning · Reinforcement Learning · Protein Design