Flow Autoencoders are Effective Protein Tokenizers

πŸ“… 2025-09-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing protein structure tokenization methods rely on handcrafted SE(3)-invariant modules, leading to optimization difficulties and poor scalability. This work introduces Kanzi, an end-to-end trainable protein structure tokenizer based on flow matching. Kanzi abandons local reference frames and SE(3)-invariant attention mechanisms, instead consuming global atomic coordinates directly and employing a standard Transformer architecture. It unifies structure reconstruction and generation under a single flow matching loss, simplifying the modeling paradigm while improving training stability and scalability. Despite a smaller model size and lower computational cost, Kanzi achieves superior reconstruction accuracy compared to prior tokenizers. An autoregressive generative model built on Kanzi's discrete tokens outperforms comparable token-based generative models, though it does not yet match state-of-the-art continuous diffusion models. By providing an efficient, general-purpose structural representation, Kanzi establishes a foundation for multimodal protein modeling.

πŸ“ Abstract
Protein structure tokenizers enable the creation of multimodal models of protein structure, sequence, and function. Current approaches to protein structure tokenization rely on bespoke components that are invariant to spatial symmetries, but that are challenging to optimize and scale. We present Kanzi, a flow-based tokenizer for tokenization and generation of protein structures. Kanzi consists of a diffusion autoencoder trained with a flow matching loss. We show that this approach simplifies several aspects of protein structure tokenizers: frame-based representations can be replaced with global coordinates, complex losses are replaced with a single flow matching loss, and SE(3)-invariant attention operations can be replaced with standard attention. We find that these changes stabilize the training of parameter-efficient models that outperform existing tokenizers on reconstruction metrics at a fraction of the model size and training cost. An autoregressive model trained with Kanzi outperforms similar generative models that operate over tokens, although it does not yet match the performance of state-of-the-art continuous diffusion models. Code is available here: https://github.com/rdilip/kanzi/.
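The abstract's central claim is that a single flow matching loss over global coordinates can replace the bespoke frame-based losses of earlier tokenizers. The sketch below illustrates the standard linear-interpolation flow matching objective that the paper builds on; the model, shapes, and sampling scheme here are generic placeholders, not Kanzi's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(model, x1, rng):
    """Flow matching objective on a batch of structures.

    x1: (batch, n_atoms, 3) global atomic coordinates (the data).
    A straight path interpolates from Gaussian noise x0 to data x1;
    the network regresses the path's constant velocity x1 - x0.
    """
    x0 = rng.standard_normal(x1.shape)           # noise endpoint
    t = rng.uniform(size=(x1.shape[0], 1, 1))    # per-example time in [0, 1]
    xt = (1.0 - t) * x0 + t * x1                 # point on the straight path
    v_target = x1 - x0                           # ground-truth velocity
    v_pred = model(xt, t)                        # network prediction
    return np.mean((v_pred - v_target) ** 2)     # single MSE loss

# Toy stand-in "model" that predicts zero velocity everywhere,
# evaluated on a random batch of 4 structures with 16 atoms each.
loss = flow_matching_loss(lambda xt, t: np.zeros_like(xt),
                          rng.standard_normal((4, 16, 3)), rng)
```

In Kanzi this objective replaces both the reconstruction losses and the SE(3)-invariant machinery: because the loss is computed on raw global coordinates, a plain Transformer with standard attention can serve as the velocity network.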
Problem

Research questions and friction points this paper is trying to address.

Simplifying protein structure tokenization using flow-based autoencoders
Replacing complex components with standard attention and global coordinates
Enabling efficient multimodal protein modeling with reduced training costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flow-based tokenizer simplifies protein structure representation
Replaces complex losses with single flow matching loss
Uses standard attention instead of SE(3)-invariant operations
Rohit Dilip
California Institute of Technology
Evan Zhang
OpenAI
Ayush Varshney
California Institute of Technology
David Van Valen
California Institute of Technology
Biological Physics · Systems Biology