ProteinAE: Protein Diffusion Autoencoders for Structure Encoding

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing protein structure representations struggle to simultaneously satisfy SE(3) geometric constraints, avoid discretization artifacts, and eliminate the need for multi-objective training. Method: This paper proposes ProteinAE, an end-to-end, single-objective diffusion autoencoder that uses a non-equivariant Diffusion Transformer to map protein backbone coordinates directly into a compact, continuous latent space. The autoencoder is trained with a single flow matching objective and a bottleneck architecture, obviating explicit equivariance constraints. Contribution/Results: Experiments demonstrate state-of-the-art structural reconstruction fidelity. In generative tasks, a latent diffusion model built on this space surpasses prior latent-space approaches and is competitive with leading structure-based generative models, while significantly simplifying training and improving generalization.

📝 Abstract
Developing effective representations of protein structures is essential for advancing protein science, particularly for protein generative modeling. Current approaches often grapple with the complexities of the SE(3) manifold, rely on discrete tokenization, or require multiple training objectives, all of which can hinder model optimization and generalization. We introduce ProteinAE, a novel and streamlined protein diffusion autoencoder designed to overcome these challenges by directly mapping protein backbone coordinates from E(3) into a continuous, compact latent space. ProteinAE employs a non-equivariant Diffusion Transformer with a bottleneck design for efficient compression and is trained end-to-end with a single flow matching objective, substantially simplifying the optimization pipeline. We demonstrate that ProteinAE achieves state-of-the-art reconstruction quality, outperforming existing autoencoders. The resulting latent space serves as a powerful foundation for a latent diffusion model that bypasses the need for explicit equivariance. This enables efficient, high-quality structure generation that is competitive with leading structure-based approaches and significantly outperforms prior latent-based methods. Code is available at https://github.com/OnlyLoveKFC/ProteinAE_v1.
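The single flow matching objective mentioned above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the interpolation path, the toy velocity model, and all dimensions are illustrative assumptions; a conditional flow matching loss regresses a predicted velocity onto the constant velocity of a straight path between noise and data.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_velocity_model(x_t, t):
    # Hypothetical stand-in for the Diffusion Transformer; a real model
    # would attend over residue tokens. Here: a fixed linear map.
    return 2.0 * x_t - 1.0

def flow_matching_loss(x0, x1, t):
    """Conditional flow matching on a linear interpolation path.

    x0: noise sample, x1: data sample (e.g. backbone coordinates),
    t:  time in [0, 1]. The regression target is the constant
    velocity x1 - x0 along the path x_t = (1 - t) * x0 + t * x1.
    """
    x_t = (1.0 - t) * x0 + t * x1       # point on the probability path
    v_target = x1 - x0                  # ground-truth velocity field
    v_pred = toy_velocity_model(x_t, t)
    return np.mean((v_pred - v_target) ** 2)

# Toy "backbone": 8 residues, 3 coordinates each.
x1 = rng.standard_normal((8, 3))   # data
x0 = rng.standard_normal((8, 3))   # noise
loss = flow_matching_loss(x0, x1, t=0.5)
print(np.isfinite(loss))
```

In training, `t` would be sampled uniformly per example and the loss minimized over the model's parameters; the same objective drives both the autoencoder and the latent generative model described in the abstract.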
Problem

Research questions and friction points this paper is trying to address.

Developing effective protein structure representations for generative modeling
Overcoming SE(3) manifold complexities and discrete tokenization limitations
Creating continuous latent space for efficient protein structure generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Non-equivariant Diffusion Transformer for protein encoding
End-to-end training with single flow matching objective
Continuous latent space enabling competitive structure generation
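The bottleneck compression behind the second and third points can be sketched schematically. All sizes, the pooling factor, and the linear maps below are made-up assumptions for illustration; the paper's encoder is a Diffusion Transformer, not a linear projection. The sketch only shows the shape flow: per-residue coordinates are projected, pooled into fewer latent tokens (the bottleneck), then upsampled and projected back.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 32 residues, compressed 4x into 8 latent tokens of width 16.
n_res, stride, d_latent = 32, 4, 16

coords = rng.standard_normal((n_res, 3))          # backbone coordinates in R^3
W_enc = rng.standard_normal((3, d_latent)) * 0.1  # toy linear "encoder"
W_dec = rng.standard_normal((d_latent, 3)) * 0.1  # toy linear "decoder"

# Encode: project per-residue coords, then pool each group of `stride`
# residues into one latent token -- the compression bottleneck.
feats = coords @ W_enc                                                  # (32, 16)
latent = feats.reshape(n_res // stride, stride, d_latent).mean(axis=1)  # (8, 16)

# Decode: broadcast each latent token back over its residue group
# and project back to coordinate space.
upsampled = np.repeat(latent, stride, axis=0)     # (32, 16)
recon = upsampled @ W_dec                         # (32, 3)

print(latent.shape, recon.shape)
```

A downstream latent generative model then only needs to model the compact `(8, 16)` array rather than raw coordinates, which is what lets ProteinAE's latent diffusion model skip explicit SE(3) equivariance.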