SaDiT: Efficient Protein Backbone Design via Latent Structural Tokenization and Diffusion Transformers

📅 2026-02-06
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high computational cost and low sampling efficiency of existing diffusion-based protein backbone generation methods, which hinder large-scale structural exploration. The authors propose SaDiT, a framework that, for the first time, integrates SaProt's structural tokenization with a Diffusion Transformer (DiT) to model protein geometry in a discrete latent space while preserving SE(3) equivariance. By introducing an IPA Token Cache mechanism to reduce attention computation, SaDiT significantly accelerates generation. In both unconditional and fold-class-conditional generation tasks, SaDiT outperforms state-of-the-art models such as RFDiffusion and Proteina in computational efficiency and structural designability.

πŸ“ Abstract
Generative models for de novo protein backbone design have achieved remarkable success in creating novel protein structures. However, these diffusion-based approaches remain computationally intensive and slower than desired for large-scale structural exploration. While recent efforts like Proteina have introduced flow-matching to improve sampling efficiency, the potential of tokenization for structural compression and acceleration remains largely unexplored in the protein domain. In this work, we present SaDiT, a novel framework that accelerates protein backbone generation by integrating SaProt Tokenization with a Diffusion Transformer (DiT) architecture. SaDiT leverages a discrete latent space to represent protein geometry, significantly reducing the complexity of the generation process while maintaining theoretical SE(3) equivariance. To further enhance efficiency, we introduce an IPA Token Cache mechanism that optimizes the Invariant Point Attention (IPA) layers by reusing computed token states during iterative sampling. Experimental results demonstrate that SaDiT outperforms state-of-the-art models, including RFDiffusion and Proteina, in both computational speed and structural viability. We evaluate our model across unconditional backbone generation and fold-class conditional generation tasks, where SaDiT shows superior ability to capture complex topological features with high designability.
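The paper does not publish the internals of the IPA Token Cache, but the abstract's description, reusing computed token states across iterative sampling steps, can be sketched as a simple memoization layer. The following is a conceptual illustration only, not the authors' implementation; `TokenCache`, `get_or_compute`, and `invalidate` are hypothetical names, and the real mechanism operates inside Invariant Point Attention layers rather than on toy integers.

```python
# Conceptual sketch of caching token states across diffusion sampling steps.
# Assumption (not from the paper): tokens unchanged between steps can reuse
# their previously computed states, and only updated tokens are recomputed.

class TokenCache:
    """Memoizes per-token states; recomputes only invalidated tokens."""

    def __init__(self):
        self._states = {}  # token id -> cached state

    def get_or_compute(self, token_id, compute_fn):
        # Reuse the cached state if present; otherwise compute and store it.
        if token_id not in self._states:
            self._states[token_id] = compute_fn(token_id)
        return self._states[token_id]

    def invalidate(self, token_ids):
        # Drop states for tokens modified by the latest sampling step.
        for t in token_ids:
            self._states.pop(t, None)


if __name__ == "__main__":
    cache = TokenCache()
    compute_calls = []

    def expensive_state(token_id):
        # Stand-in for an IPA-layer forward pass on one token.
        compute_calls.append(token_id)
        return token_id * 2

    # Step 1: all tokens computed once.
    for t in [0, 1, 2]:
        cache.get_or_compute(t, expensive_state)
    # Step 2: only token 1 changed, so only it is recomputed.
    cache.invalidate([1])
    for t in [0, 1, 2]:
        cache.get_or_compute(t, expensive_state)
    print(compute_calls)  # tokens 0 and 2 were never recomputed
```

The payoff is that per-step cost scales with the number of tokens the sampler actually updates rather than the full sequence length, which is the kind of saving the abstract attributes to the cache.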
Problem

Research questions and friction points this paper is trying to address.

protein backbone design
computational efficiency
structural tokenization
diffusion models
generative modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Latent Structural Tokenization
Diffusion Transformer
IPA Token Cache
Protein Backbone Design
SE(3) Equivariance