SimpleFold: Folding Proteins is Simpler than You Think

📅 2025-09-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Protein folding models often rely on complex domain-specific architectures—such as trigonometric updates and explicit pairwise representations—whose necessity remains questionable. This work introduces SimpleFold, the first pure Transformer-based protein folding model grounded in flow matching, eliminating all domain-specific components. It employs only standard Transformer blocks, adaptive layer normalization, and a structure-enhanced generative training objective, augmented by knowledge distillation and experimental PDB data. This design markedly simplifies the architecture while improving scalability and deployment efficiency. SimpleFold-3B achieves performance on par with state-of-the-art models across standard benchmarks—including CAMEO and CASP15—and enables efficient inference on consumer-grade hardware. These results demonstrate that general-purpose, architecture-agnostic designs can be both effective and competitive for protein structure prediction, challenging the prevailing reliance on specialized modules.

📝 Abstract
Protein folding models have achieved groundbreaking results, typically by integrating domain knowledge into both architectural blocks and training pipelines. Nonetheless, given the success of generative models across different but related problems, it is natural to question whether these architectural designs are a necessary condition for building performant models. In this paper, we introduce SimpleFold, the first flow-matching-based protein folding model that solely uses general-purpose transformer blocks. Protein folding models typically employ computationally expensive modules involving triangular updates, explicit pair representations, or multiple training objectives curated for this specific domain. Instead, SimpleFold employs standard transformer blocks with adaptive layers and is trained via a generative flow-matching objective with an additional structural term. We scale SimpleFold to 3B parameters and train it on approximately 9M distilled protein structures together with experimental PDB data. On standard folding benchmarks, SimpleFold-3B achieves competitive performance compared to state-of-the-art baselines. In addition, SimpleFold demonstrates strong performance in ensemble prediction, which is typically difficult for models trained via deterministic reconstruction objectives. Due to its general-purpose architecture, SimpleFold shows efficiency in deployment and inference on consumer-level hardware. SimpleFold challenges the reliance on complex domain-specific architectural designs in protein folding, opening up an alternative design space for future progress.
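The flow-matching objective the abstract describes can be illustrated with a minimal sketch: sample a time t, linearly interpolate between a noise sample and the target structure, and regress the model's predicted velocity against the true displacement. This is a generic conditional flow-matching loss, not SimpleFold's actual implementation; `velocity_model` and the variable names are placeholders, and the paper's additional structural term is omitted.

```python
import random

def flow_matching_loss(x0, x1, velocity_model):
    """Single-sample conditional flow-matching loss (toy sketch).

    x0: noise sample, x1: target coordinates (flat lists of floats).
    velocity_model(xt, t) -> predicted velocity, same shape as x0.
    """
    t = random.random()                                  # t ~ Uniform(0, 1)
    xt = [(1 - t) * a + t * b for a, b in zip(x0, x1)]   # linear interpolant
    target_v = [b - a for a, b in zip(x0, x1)]           # true velocity x1 - x0
    pred_v = velocity_model(xt, t)
    # mean squared error between predicted and true velocity
    return sum((p - v) ** 2 for p, v in zip(pred_v, target_v)) / len(x0)

# Sanity check: an oracle that already outputs x1 - x0 incurs zero loss.
x0 = [0.0, 1.0, -2.0]
x1 = [3.0, -1.0, 0.5]
oracle = lambda xt, t: [3.0, -2.0, 2.5]
assert flow_matching_loss(x0, x1, oracle) == 0.0
```

At inference, a model trained this way generates a structure by integrating the learned velocity field from noise to data, which is what makes ensemble prediction natural for generative folding models.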
Problem

Research questions and friction points this paper is trying to address.

Challenging complex domain-specific architectures in protein folding models
Developing general-purpose transformer-based protein folding without specialized modules
Achieving competitive performance using simplified architecture and flow-matching
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses flow-matching generative objective with structural term
Employs standard transformer blocks with adaptive layers
Scales to 3B parameters trained on distilled structures
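The "adaptive layers" in the bullets above refer to adaptive layer normalization, where conditioning information modulates the normalized activations with a learned scale and shift. The sketch below shows the general AdaLN pattern under that assumption; in practice the scale and shift would come from a small network on the conditioning signal (e.g. the flow time step), not be passed in directly, and none of these names come from SimpleFold's code.

```python
import math

def layer_norm(x, eps=1e-5):
    """Normalize a vector to zero mean and unit variance."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def adaptive_layer_norm(x, scale, shift):
    """AdaLN sketch: modulate normalized activations with a
    conditioning-derived scale and shift (hypothetical interface)."""
    normed = layer_norm(x)
    return [n * (1 + s) + b for n, s, b in zip(normed, scale, shift)]

x = [1.0, 2.0, 3.0, 4.0]
# With zero scale and shift, AdaLN reduces to plain layer norm.
out = adaptive_layer_norm(x, scale=[0.0] * 4, shift=[0.0] * 4)
assert out == layer_norm(x)
```

Because the modulation is just an elementwise affine transform, this mechanism drops into a standard transformer block without any pairwise or triangular machinery.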