All-atom Diffusion Transformers: Unified generative modelling of molecules and materials

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing generative models for molecules and periodic materials are fragmented, with no unified framework. Method: This paper introduces ADiT, a unified atomic-system generation framework built on a shared all-atom representation compatible with both periodic and non-periodic systems. A Transformer-based autoencoder maps molecules and crystals into a shared latent space, and a latent diffusion model is trained in that space, so both system types are modeled jointly end-to-end. Contribution/Results: ADiT achieves the first unified generative modeling of both molecular and crystalline systems, a step towards scalable foundation-model paradigms. Experiments demonstrate state-of-the-art performance on QM9 and MP20, significantly improving the validity and realism of generated molecules and crystals, alongside substantial gains in training and inference efficiency. Notably, performance improves predictably with model size up to 500 million parameters, supporting the feasibility and robustness of cross-system generative modeling.

📝 Abstract
Diffusion models are the standard toolkit for generative modelling of 3D atomic systems. However, for different types of atomic systems - such as molecules and materials - the generative processes are usually highly specific to the target system despite the underlying physics being the same. We introduce the All-atom Diffusion Transformer (ADiT), a unified latent diffusion framework for jointly generating both periodic materials and non-periodic molecular systems using the same model: (1) An autoencoder maps a unified, all-atom representation of molecules and materials to a shared latent embedding space; and (2) A diffusion model is trained to generate new latent embeddings that the autoencoder can decode to sample new molecules or materials. Experiments on QM9 and MP20 datasets demonstrate that jointly trained ADiT generates realistic and valid molecules as well as materials, exceeding state-of-the-art results from molecule- and crystal-specific models. ADiT uses standard Transformers for both the autoencoder and diffusion model, resulting in significant speedups during training and inference compared to equivariant diffusion models. Scaling ADiT up to half a billion parameters predictably improves performance, representing a step towards broadly generalizable foundation models for generative chemistry. Open source code: https://github.com/facebookresearch/all-atom-diffusion-transformer
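The two-stage pipeline the abstract describes - an autoencoder mapping per-atom features into a shared latent space, then a diffusion process defined over those latent tokens - can be illustrated with a minimal numpy sketch. This is not the paper's actual architecture or API (ADiT uses Transformers for both stages); all function and variable names here are hypothetical, and linear maps stand in for the learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8  # size of the shared latent space (illustrative)

def encode(atom_types, coords, W_enc):
    """Stage 1: map all-atom features (type one-hot + 3D position)
    to per-atom latent tokens. A linear map stands in for the
    Transformer autoencoder's encoder."""
    feats = np.concatenate([atom_types, coords], axis=-1)  # (N, T+3)
    return feats @ W_enc                                   # (N, LATENT_DIM)

def decode(latents, W_dec, n_types):
    """Map latent tokens back to atom types (argmax over logits)
    and 3D coordinates."""
    out = latents @ W_dec                                  # (N, T+3)
    return out[:, :n_types].argmax(-1), out[:, n_types:]

def noise_latents(z0, abar_t, rng):
    """Stage 2 (forward process): DDPM-style Gaussian noising of the
    latent tokens, z_t = sqrt(abar_t) * z0 + sqrt(1 - abar_t) * eps.
    A denoiser trained to invert this would generate new latents."""
    eps = rng.standard_normal(z0.shape)
    return np.sqrt(abar_t) * z0 + np.sqrt(1.0 - abar_t) * eps, eps

# A toy 3-atom "molecule": one-hot types over 4 elements + 3D coordinates.
# For a periodic crystal, the same token format would carry fractional
# coordinates plus lattice information (omitted here).
n_types = 4
atom_types = np.eye(n_types)[[0, 1, 1]]          # (3, 4)
coords = rng.standard_normal((3, 3))             # (3, 3)

W_enc = rng.standard_normal((n_types + 3, LATENT_DIM)) * 0.1
W_dec = rng.standard_normal((LATENT_DIM, n_types + 3)) * 0.1

z0 = encode(atom_types, coords, W_enc)           # per-atom latents
zt, eps = noise_latents(z0, abar_t=0.5, rng=rng) # noised latents at step t
types_hat, coords_hat = decode(z0, W_dec, n_types)

print(z0.shape, zt.shape, coords_hat.shape)
```

The key design point this sketch mirrors is that the diffusion model never sees raw atoms: both molecules and crystals reduce to the same latent token format, so a single denoiser can be trained jointly on both.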
Problem

Research questions and friction points this paper is trying to address.

Unified generative modeling for molecules and materials
Overcoming system-specific generative processes in atomic systems
Enhancing speed and performance in generative chemistry models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified latent diffusion framework for molecules and materials
Autoencoder maps all-atom representations to shared latent space
Standard Transformers enable faster training and inference