🤖 AI Summary
Joint generation of sequences and 3D structures for scientific data (e.g., materials, molecules, proteins) remains challenging: autoregressive models lack the numerical precision that continuous 3D coordinates demand, while diffusion models struggle to model discrete sequences. Method: We propose a unified framework that couples autoregressive next-token prediction with conditional diffusion. The autoregressive backbone eases the training of the conditional diffusion head, while the diffusion head in turn sharpens the precision of the autoregressive predictions; combined with VQ-VAE-based discrete tokenization and multimodal alignment, the framework yields stable generation of long, complex structures. Contribution/Results: The method delivers a significant leap over the prior state of the art (SOTA) in crystal structure prediction and establishes new SOTA results for small-molecule structure prediction, de novo design, and conditional generation, with especially large gains on long sequences.
📝 Abstract
Unified generation of sequence and structure for scientific data (e.g., materials, molecules, proteins) is a critical task. Existing approaches primarily rely on either autoregressive sequence models or diffusion models, each offering distinct advantages and facing notable limitations. Autoregressive models, such as GPT, Llama, and Phi-4, have demonstrated remarkable success in natural language generation and have been extended to multimodal tasks (e.g., image, video, and audio) using advanced encoders like VQ-VAE to represent complex modalities as discrete sequences. However, their direct application to scientific domains is challenging due to the high precision requirements and the diverse nature of scientific data. On the other hand, diffusion models excel at generating high-dimensional scientific data, such as protein, molecule, and material structures, with remarkable accuracy. Yet, their inability to effectively model sequences limits their potential as general-purpose multimodal foundation models. To address these challenges, we propose UniGenX, a unified framework that combines autoregressive next-token prediction with conditional diffusion models. This integration leverages the strengths of autoregressive models to ease the training of conditional diffusion models, while diffusion-based generative heads enhance the precision of autoregressive predictions. We validate the effectiveness of UniGenX on material and small molecule generation tasks, achieving a significant leap in state-of-the-art performance for material crystal structure prediction and establishing new state-of-the-art results for small molecule structure prediction, de novo design, and conditional generation. Notably, UniGenX demonstrates significant improvements, especially in handling long sequences for complex structures, showcasing its efficacy as a versatile tool for scientific data generation.
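The abstract gives no implementation details, but the core pattern it describes — an autoregressive backbone whose hidden states condition a diffusion head that denoises continuous 3D coordinates — can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not UniGenX's actual architecture: the "backbone" is a causal prefix average over random embeddings, and the "diffusion head" is a single linear noise predictor run through a DDPM-style reverse loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar_hidden_states(token_ids, d_model=16):
    # Hypothetical stand-in for a transformer backbone: an embedding
    # lookup plus a causal cumulative mean, so each position's state
    # summarizes its prefix, as an autoregressive hidden state would.
    emb = rng.standard_normal((100, d_model)) * 0.1  # toy embedding table
    h = emb[token_ids]
    return np.cumsum(h, axis=0) / np.arange(1, len(token_ids) + 1)[:, None]

def diffusion_head_step(x_t, t, cond, w_x, w_c):
    # One reverse (denoising) step of a toy conditional diffusion head:
    # a linear "network" predicts the noise from the noisy coordinates
    # x_t and the AR hidden states `cond`, then a DDPM-style update
    # moves x_t toward the clean coordinates.
    beta = 0.02
    alpha = 1.0 - beta
    eps_hat = x_t @ w_x + cond @ w_c  # conditioning enters here
    mean = (x_t - beta / np.sqrt(1.0 - alpha**t) * eps_hat) / np.sqrt(alpha)
    return mean + np.sqrt(beta) * rng.standard_normal(x_t.shape)

# Generate 3D coordinates for a 5-token sequence: the discrete tokens
# condition the diffusion head, which refines coordinates from noise.
tokens = np.array([3, 14, 15, 9, 2])
cond = ar_hidden_states(tokens)                 # (5, 16) conditioning states
x = rng.standard_normal((5, 3))                 # start from pure noise
w_x = rng.standard_normal((3, 3)) * 0.1         # toy, untrained weights
w_c = rng.standard_normal((16, 3)) * 0.1
for t in range(50, 0, -1):                      # reverse diffusion loop
    x = diffusion_head_step(x, t, cond, w_x, w_c)
print(x.shape)  # (5, 3): one 3D coordinate per sequence position
```

In a real model the two toy linear maps would be learned networks and the backbone a causal transformer, but the data flow — discrete tokens in, per-position hidden states out, coordinates denoised under that conditioning — is the coupling the abstract describes.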