Diffusion on language model encodings for protein sequence generation

📅 2024-03-06
📈 Citations: 5
✨ Influential: 0
🤖 AI Summary
This work addresses the underexplored problem of continuous diffusion modeling for protein sequence design. We propose DiMA, the first general-purpose continuous diffusion framework that generates sequences directly in the latent space of pretrained protein language models (PLMs) such as ESM-2 and SaProt. DiMA provides a unified continuous diffusion paradigm across PLMs spanning 8M to 3B parameters, enabling consistent training for sequence-only, dual-decodable, and multimodal representations, and it flexibly supports conditional design tasks, including protein family generation, motif scaffolding, gap filling, and fold-specific design. Experiments demonstrate that DiMA consistently outperforms autoregressive, discrete diffusion, and flow-matching baselines across the key metrics of generation quality, diversity, novelty, and distribution fidelity. Notably, it achieves state-of-the-art performance on two distinct protein modalities.

📝 Abstract
Protein sequence design has seen significant advances through discrete diffusion and autoregressive approaches, yet the potential of continuous diffusion remains underexplored. Here, we present DiMA, a latent diffusion framework that operates on protein language model representations. Through systematic exploration of architectural choices and diffusion components, we develop a robust methodology that generalizes across multiple protein encoders ranging from 8M to 3B parameters. We demonstrate that our framework achieves consistently high performance across sequence-only (ESM-2, ESMc), dual-decodable (CHEAP), and multimodal (SaProt) representations using the same architecture and training approach. We extensively evaluate existing methods alongside DiMA using multiple metrics across two protein modalities, covering quality, diversity, novelty, and distribution matching of generated proteins. DiMA consistently produces novel, high-quality, and diverse protein sequences and achieves strong results compared to baselines such as autoregressive, discrete diffusion, and flow-matching language models. The model demonstrates versatile functionality, supporting conditional generation tasks including protein family generation, motif scaffolding and infilling, and fold-specific sequence design. This work provides a universal continuous diffusion framework for protein sequence generation, offering both architectural insights and practical applicability across various protein design scenarios.
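
To make the latent-diffusion idea concrete, here is a minimal, hypothetical sketch of the forward (noising) process on ESM-2 encodings. The checkpoint name is a real Hugging Face ESM-2 release, but the cosine noise schedule and all variable names are illustrative assumptions, not DiMA's exact configuration.

```python
import torch
from transformers import AutoTokenizer, EsmModel

# Minimal sketch (assumed setup, not the paper's exact recipe):
# encode a protein sequence with a pretrained PLM, then corrupt the
# latent with Gaussian noise at a random diffusion timestep.
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
encoder = EsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D").eval()

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # example amino-acid sequence
tokens = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    z0 = encoder(**tokens).last_hidden_state  # clean latent, shape (1, L, d)

# Variance-preserving forward process with an assumed cosine schedule.
T = 1000
t = torch.randint(0, T, (1,))
alpha_bar = torch.cos((t.float() / T) * torch.pi / 2) ** 2  # in (0, 1]
z_t = alpha_bar.sqrt() * z0 + (1 - alpha_bar).sqrt() * torch.randn_like(z0)

# A denoiser network is trained to recover z0 (or the injected noise)
# from (z_t, t); at inference, sampling starts from pure Gaussian noise,
# runs the learned reverse process, and decodes latents back to residues.
```

Operating on continuous encodings rather than discrete amino-acid tokens is what lets standard Gaussian diffusion machinery apply directly to protein sequences.
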
Problem

Research questions and friction points this paper is trying to address.

Explores continuous diffusion for protein sequence design.
Develops DiMA framework for protein language models.
Evaluates DiMA across diverse protein generation tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continuous diffusion on encodings
Latent diffusion framework DiMA
Versatile conditional generation tasks (see the sketch below)
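
As an illustration of one such conditional task, the sketch below shows motif scaffolding via the generic "replacement" inpainting trick for diffusion models: the known motif latents are re-imposed at every reverse step while the remaining positions are generated. The `denoiser`, the cosine schedule, and the DDIM-style update are assumptions for illustration; the paper's actual conditioning mechanism may differ.

```python
import torch

def scaffold(denoiser, z_motif, mask, T=1000):
    """Hypothetical replacement-style latent inpainting for motif scaffolding.

    denoiser: assumed trained network mapping (z_t, t) -> predicted clean z0.
    z_motif:  latent encoding of the motif (zeros elsewhere), shape (1, L, d).
    mask:     1.0 where motif positions are fixed, 0.0 where we generate.
    """
    z = torch.randn_like(z_motif)  # start the reverse process from pure noise
    for t in reversed(range(1, T)):
        a_t = torch.cos(torch.tensor(t / T) * torch.pi / 2) ** 2
        a_s = torch.cos(torch.tensor((t - 1) / T) * torch.pi / 2) ** 2
        z0_hat = denoiser(z, t)  # predict the clean latent
        # Deterministic DDIM-style step toward the predicted clean latent.
        eps_hat = (z - a_t.sqrt() * z0_hat) / (1 - a_t).clamp_min(1e-6).sqrt()
        z = a_s.sqrt() * z0_hat + (1 - a_s).sqrt() * eps_hat
        # Re-impose the known motif at the current noise level.
        z_known = a_s.sqrt() * z_motif + (1 - a_s).sqrt() * torch.randn_like(z)
        z = mask * z_known + (1 - mask) * z
    return z  # decode with the PLM decoder to obtain the scaffolded sequence
```

The same masking pattern covers gap filling (mask the flanks, generate the gap) and family- or fold-conditioned variants when the denoiser additionally takes a conditioning signal.
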
Viacheslav Meshchaninov
Moscow State University, Moscow, Russia
Pavel V. Strashnov
AIRI, Moscow, Russia
Andrey Shevtsov
AIRI, Moscow, Russia
Fedor Nikolaev
AIRI, Moscow, Russia
N. Ivanisenko
AIRI, Moscow, Russia
O. Kardymon
AIRI, Moscow, Russia
Dmitry Vetrov
Constructor University, Bremen, Germany