🤖 AI Summary
Pretrained generative models for drug molecule design often suffer from premature convergence to local optima during reinforcement learning (RL)-based reward optimization, resulting in limited molecular diversity and suboptimal drug-likeness. To address this, we propose an RL framework featuring adaptive reward function updating. We systematically investigate diverse intrinsic motivation mechanisms for controlling molecular diversity and introduce a novel synergistic reward correction strategy that jointly incorporates structural similarity penalization and uncertainty-aware predictive rewards. Our method integrates graph neural networks (GNNs) with policy gradient optimization. Evaluated on multiple benchmark datasets, the generated molecule sets achieve a 37% average improvement in diversity—measured by scaffold and fingerprint dissimilarity—while maintaining or improving drug-likeness (quantitative estimate of drug-likeness, QED; synthetic accessibility, SA) and target-binding activity (pIC₅₀). The framework significantly outperforms state-of-the-art baselines, demonstrating superior balance between exploration and exploitation in de novo molecular generation.
📝 Abstract
Fine-tuning a pre-trained generative model has demonstrated good performance in generating promising drug molecules. The fine-tuning task is often formulated as a reinforcement learning problem, where previous methods efficiently learn to optimize a reward function to generate potential drug molecules. Nevertheless, in the absence of an adaptive update mechanism for the reward function, the optimization process can become stuck in local optima. The efficacy of the optimal molecule in a local optimization may not translate to usefulness in the subsequent drug optimization process or as a potential standalone clinical candidate. Therefore, it is important to generate a diverse set of promising molecules. Prior work has modified the reward function by penalizing structurally similar molecules, primarily focusing on finding molecules with higher rewards. To date, no study has comprehensively examined how different adaptive update mechanisms for the reward function influence the diversity of generated molecules. In this work, we investigate a wide range of intrinsic motivation methods and strategies to penalize the extrinsic reward, and how they affect the diversity of the set of generated molecules. Our experiments reveal that combining structure- and prediction-based methods generally yields better results in terms of molecular diversity.