🤖 AI Summary
Existing molecular optimization methods are predominantly instance-based optimizers, suffering from limited generalization and high computational costs. This work proposes a serialized molecular generation framework built upon a pretrained graph Transformer, fine-tuned with Group Relative Policy Optimization (GRPO). During reinforcement learning fine-tuning, GRPO mitigates the variance in policy learning caused by the differing difficulty of starting molecular structures by normalizing rewards relative to each initial molecule. The approach achieves efficient and transferable multi-objective property optimization on out-of-distribution molecular scaffolds without requiring oracle calls or post-processing during inference, matching the performance of state-of-the-art instance optimizers.
📝 Abstract
Molecular design encompasses tasks ranging from de-novo design to structural alteration of given molecules or fragments. For the latter, state-of-the-art methods predominantly function as "Instance Optimizers", expending significant compute restarting the search for every input structure. While model-based approaches theoretically offer amortized efficiency by learning a policy transferable to unseen structures, existing methods struggle to generalize. We identify a key failure mode: the high variance arising from the heterogeneous difficulty of distinct starting structures. To address this, we introduce GRXForm, adapting a pre-trained Graph Transformer model that optimizes molecules via sequential atom-and-bond additions. We employ Group Relative Policy Optimization (GRPO) for goal-directed fine-tuning to mitigate variance by normalizing rewards relative to the starting structure. Empirically, GRXForm generalizes to out-of-distribution molecular scaffolds without inference-time oracle calls or refinement, achieving scores in multi-objective optimization competitive with leading instance optimizers.
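The variance-reduction idea at the core of the abstract can be sketched concretely: for each starting molecule, a group of edit trajectories is sampled, and rewards are normalized within that group, so easy and hard starting structures contribute comparable learning signals. The following is a minimal illustrative sketch of such group-relative advantage normalization; the function and variable names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalize rewards within each group of rollouts.

    rewards: shape (num_start_molecules, group_size) -- one row per
    starting structure, one column per sampled edit trajectory.
    Returns zero-mean, unit-scale advantages per group, so the policy
    gradient signal does not depend on the absolute reward level of a
    given starting structure.
    """
    mean = rewards.mean(axis=1, keepdims=True)
    std = rewards.std(axis=1, keepdims=True)
    return (rewards - mean) / (std + eps)

# Two starting molecules with very different reward scales yield
# comparable, zero-mean advantages within each group.
rewards = np.array([[0.90, 0.80, 0.95],   # "easy" start: uniformly high rewards
                    [0.10, 0.30, 0.20]])  # "hard" start: uniformly low rewards
adv = group_relative_advantages(rewards)
```

Because each row is centered on its own mean, a trajectory is rewarded only for outperforming its siblings from the same starting molecule, which is one way to read the abstract's claim about mitigating heterogeneous-difficulty variance.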