🤖 AI Summary
This work addresses the lack of effective training and evaluation frameworks for reasoning-based large language models in unsupervised de novo molecular generation, a gap that hinders efficient exploration of high-scoring yet unknown chemical spaces. We propose MolRGen—the first benchmark specifically designed for unsupervised de novo molecular generation—introducing a diversity-aware top-k scoring metric and a reinforcement learning–based training paradigm that jointly optimizes molecular generation and property prediction. Leveraging this framework, we train a 24-billion-parameter reasoning-based language model that substantially improves both the quality and diversity of generated molecules on the new benchmark. Extensive experiments validate the efficacy of our approach and provide an in-depth analysis of the model's performance limits.
📝 Abstract
Recent advances in reasoning-based large language models (LLMs) have demonstrated substantial improvements in complex problem-solving tasks. Motivated by these advances, several works have explored the application of reasoning LLMs to drug discovery and molecular design. However, most existing approaches either focus on evaluation or rely on training setups that require ground-truth labels, such as molecule pairs with known property modifications. Such supervision is unavailable in \textit{de novo} molecular generation, where the objective is to generate novel molecules that optimize a desirability score without prior knowledge of high-scoring candidates. To bridge this gap, we introduce MolRGen, a large-scale benchmark and dataset for training and evaluating reasoning-based LLMs on \textit{de novo} molecular generation. Our contributions are threefold. First, we propose a setting for evaluating and training models on \textit{de novo} molecular generation and property prediction. Second, we introduce a novel diversity-aware top-$k$ score that captures both the quality and diversity of generated molecules. Third, we demonstrate that our setting supports training LLMs for molecular generation by training a 24B LLM with reinforcement learning, and we provide a detailed analysis of its performance and limitations.
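The abstract does not spell out the form of the diversity-aware top-$k$ score. One plausible instantiation, sketched below under assumptions of our own (a greedy selection with a similarity threshold, and normalization by $k$ so that a batch lacking diverse candidates is penalized), selects the highest-scoring molecules that are mutually dissimilar and averages their scores. The `similarity` function is a user-supplied stand-in for a real chemical similarity such as Tanimoto similarity over fingerprints.

```python
def diversity_aware_top_k(molecules, scores, similarity, k=5, sim_threshold=0.7):
    """Hypothetical sketch of a diversity-aware top-k score.

    Greedily picks up to k highest-scoring molecules whose pairwise
    similarity to every already-selected molecule stays below
    sim_threshold, then averages their scores over k (so unfilled
    slots count as zero, penalizing low-diversity generations).
    """
    ranked = sorted(zip(molecules, scores), key=lambda p: p[1], reverse=True)
    selected = []
    for mol, score in ranked:
        if all(similarity(mol, m) < sim_threshold for m, _ in selected):
            selected.append((mol, score))
        if len(selected) == k:
            break
    return sum(s for _, s in selected) / k


def char_jaccard(a, b):
    """Toy similarity: Jaccard index over SMILES character sets.
    A placeholder only -- not a chemically meaningful similarity."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)
```

Dividing by $k$ rather than by the number of selected molecules is the design choice that makes the score diversity-aware: near-duplicate candidates are filtered out, so a model that generates many copies of one good molecule scores lower than one that generates several distinct good molecules.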