AI Summary
This work addresses the underexplored problem of continuous diffusion modeling for protein sequence design. We propose DiMA, the first general-purpose continuous diffusion framework that directly generates sequences in the latent space of pretrained protein language models (PLMs) such as ESM-2 and SaProt. DiMA establishes a unified continuous diffusion paradigm across PLMs spanning 8M to 3B parameters, enabling consistent training for sequence-only, dual-decoding, and multimodal representations. It flexibly supports conditional design tasks, including protein family generation, motif scaffolding, gap filling, and fold-specific design. Experiments demonstrate that DiMA consistently outperforms autoregressive, discrete diffusion, and flow-matching baselines across all key metrics: generation quality, diversity, novelty, and distribution fidelity. Notably, it achieves state-of-the-art performance on two distinct protein modalities.
Abstract
Protein sequence design has seen significant advances through discrete diffusion and autoregressive approaches, yet the potential of continuous diffusion remains underexplored. Here, we present DiMA, a latent diffusion framework that operates on protein language model representations. Through systematic exploration of architectural choices and diffusion components, we develop a robust methodology that generalizes across multiple protein encoders ranging from 8M to 3B parameters. We demonstrate that our framework achieves consistently high performance across sequence-only (ESM-2, ESMc), dual-decodable (CHEAP), and multimodal (SaProt) representations using the same architecture and training approach. We extensively evaluate existing methods alongside DiMA using multiple metrics across two protein modalities, covering the quality, diversity, novelty, and distribution matching of generated proteins. DiMA consistently produces novel, high-quality, and diverse protein sequences and achieves strong results compared to autoregressive, discrete diffusion, and flow-matching language model baselines. The model demonstrates versatile functionality, supporting conditional generation tasks including protein family generation, motif scaffolding and infilling, and fold-specific sequence design. This work provides a universal continuous diffusion framework for protein sequence generation, offering both architectural insights and practical applicability across various protein design scenarios.
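To make the core idea concrete, the sketch below illustrates the standard Gaussian forward-noising step that continuous latent diffusion applies to encoder embeddings. This is a toy illustration, not DiMA's actual training code: the sequence length, embedding dimension, and linear beta schedule are assumptions chosen for the example, and a random array stands in for a real PLM latent.

```python
import numpy as np

def forward_diffuse(z0, t, alpha_bar, rng):
    """Corrupt a clean latent z0 at timestep t with Gaussian noise.

    z0: (L, d) continuous latent for a protein of L residues (here a toy
        stand-in for an encoder output such as an ESM-2 embedding).
    alpha_bar: cumulative product of (1 - beta), shape (T,).
    Returns the noised latent z_t and the noise eps (the usual
    regression target for a denoising network).
    """
    eps = rng.standard_normal(z0.shape)
    a = alpha_bar[t]
    z_t = np.sqrt(a) * z0 + np.sqrt(1.0 - a) * eps
    return z_t, eps

# Toy linear beta schedule over T diffusion steps (an assumption,
# not the schedule used in the paper).
T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

# Stand-in latent: 64 residues x 320-dim embeddings.
rng = np.random.default_rng(0)
z0 = rng.standard_normal((64, 320))

z_t, eps = forward_diffuse(z0, t=50, alpha_bar=alpha_bar, rng=rng)
```

A denoiser trained to predict `eps` (or `z0`) from `z_t` can then be run in reverse to sample new latents, which a decoder maps back to amino-acid sequences; that encode-diffuse-decode loop is the pattern the abstract describes.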