🤖 AI Summary
Existing text-conditioned molecular generation methods employ single-step conditional encoding, failing to fully satisfy multiple structural and semantic constraints embedded in complex natural language prompts—resulting in poor interpretability and low substructure coverage. Method: We propose Chain-of-Generation, a latent diffusion-based framework that parses natural language descriptions into ordered semantic segments and performs progressive, multi-stage molecular generation. It introduces three key innovations: (1) training-free multi-stage inference, (2) post-alignment learning between text and molecule latent spaces, and (3) curriculum-style semantic guidance—collectively mitigating end-to-end text-encoding bias. Contribution/Results: On multiple benchmarks and real-world tasks, our method significantly improves semantic consistency (+18.7%), structural diversity (+22.3%), and fine-grained controllability. It is the first approach to enable stepwise, interpretable modeling and precise fulfillment of composite chemical requirements.
📝 Abstract
Text-conditioned molecular generation aims to translate natural-language descriptions into chemical structures, enabling scientists to specify functional groups, scaffolds, and physicochemical constraints without handcrafted rules. Diffusion-based models, particularly latent diffusion models (LDMs), have recently shown promise by performing stochastic search in a continuous latent space that compactly captures molecular semantics. Yet existing methods rely on one-shot conditioning, where the entire prompt is encoded once and applied throughout diffusion, making it hard to satisfy all the requirements in the prompt. We discuss three outstanding challenges of one-shot conditioning generation, including the poor interpretability of the generated components, the failure to generate all substructures, and the overambition in considering all requirements simultaneously. We then propose three principles to address those challenges, motivated by which we propose Chain-of-Generation (CoG), a training-free multi-stage latent diffusion framework. CoG decomposes each prompt into curriculum-ordered semantic segments and progressively incorporates them as intermediate goals, guiding the denoising trajectory toward molecules that satisfy increasingly rich linguistic constraints. To reinforce semantic guidance, we further introduce a post-alignment learning phase that strengthens the correspondence between textual and molecular latent spaces. Extensive experiments on benchmark and real-world tasks demonstrate that CoG yields higher semantic alignment, diversity, and controllability than one-shot baselines, producing molecules that more faithfully reflect complex, compositional prompts while offering transparent insight into the generation process.