Reinforcement Learning with LLM-Guided Action Spaces for Synthesizable Lead Optimization

📅 2026-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a central challenge in lead compound optimization: ensuring molecular validity and synthetic feasibility at the same time. The authors formulate the problem as a Markov decision process and leverage a tool-augmented large language model (LLM) agent that operates within an action space constrained by validated reaction templates, dynamically identifying reactive sites and generating chemically plausible transformation pathways. The approach integrates trajectory-level Group Relative Policy Optimization (GRPO), a SMILES caching mechanism, and chemical analysis tools so that each modification step remains synthesizable while redundant computation is avoided. Experimental results show that the method achieves an average Top-10 score of 0.563 across 14 tasks, a 10.4% relative improvement over the strongest synthesizability-aware baseline, attains the highest sample efficiency on 10 tasks, and reduces end-to-end optimization time by approximately 43%.
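The SMILES caching mechanism mentioned above can be illustrated with a minimal sketch (not the paper's implementation): repeated oracle evaluations of molecules already seen during optimization are served from memory, which is one way a caching layer can cut end-to-end time. The `OracleCache` class and the toy `len`-based oracle here are hypothetical; a real system would first canonicalize each SMILES string (e.g. with RDKit) so that equivalent representations share one cache key.

```python
# Hypothetical sketch of a SMILES-keyed oracle cache.
# Assumption: keys are raw SMILES strings; a production version would
# canonicalize them (e.g. RDKit's Chem.MolToSmiles) before lookup.
class OracleCache:
    def __init__(self, oracle):
        self.oracle = oracle   # expensive property-scoring function
        self.cache = {}        # SMILES string -> cached score
        self.calls = 0         # count of actual oracle invocations

    def score(self, smiles):
        # Serve from memory when the molecule was already evaluated.
        if smiles not in self.cache:
            self.calls += 1
            self.cache[smiles] = self.oracle(smiles)
        return self.cache[smiles]

# Usage with a toy oracle that "scores" a molecule by string length.
cached = OracleCache(oracle=len)
scores = [cached.score(s) for s in ["CCO", "c1ccccc1", "CCO"]]
# The repeated "CCO" triggers no second oracle call.
```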
📝 Abstract
Lead optimization in drug discovery requires improving therapeutic properties while ensuring that proposed molecular modifications correspond to feasible synthetic routes. Existing approaches either prioritize property scores without enforcing synthesizability, or rely on expensive enumeration over large reaction networks, while direct application of Large Language Models (LLMs) frequently produces chemically invalid structures. We introduce MolReAct, a framework that formulates lead optimization as a Markov Decision Process over a synthesis-constrained action space defined by validated reaction templates. A tool-augmented LLM agent serves as a dynamic reaction environment that invokes specialized chemical analysis tools to identify reactive sites and propose chemically grounded transformations from matched templates. A policy model trained via Group Relative Policy Optimization (GRPO) selects among these constrained actions to maximize long-term oracle reward across multi-step reaction trajectories. A SMILES-based caching mechanism further reduces end-to-end optimization time by approximately 43%. Across 13 property optimization tasks from the Therapeutic Data Commons and one structure-based docking task, MolReAct achieves an average Top-10 score of 0.563, outperforming the strongest synthesizable baseline by 10.4% in relative improvement, and attains the best sample efficiency on 10 of 14 tasks. Ablations confirm that tool-augmented reaction proposals and trajectory-level policy optimization contribute complementary gains. By grounding every step in validated reaction templates, MolReAct produces property-improved molecules, each accompanied by an explicit synthetic pathway.
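The trajectory-level GRPO training described in the abstract can be sketched at its core step: sample a group of trajectories for the same lead molecule, score each with the oracle, and normalize rewards within the group so that no learned value critic is needed. The function below is a generic illustration of group-relative advantage computation, not the authors' code; the reward values are made up.

```python
# Sketch of GRPO-style group-relative advantages: each trajectory's
# reward is compared against the mean of its sampling group, scaled
# by the group's standard deviation (with eps guarding division by 0).
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize trajectory rewards against the group baseline."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four sampled trajectories for one starting molecule,
# each scored by a property oracle in [0, 1] (illustrative values).
advs = group_relative_advantages([0.42, 0.58, 0.50, 0.66])
# Trajectories above the group mean get positive advantages and are
# reinforced; those below get negative advantages and are suppressed.
```

Because the baseline is the group mean rather than a critic's estimate, the advantages within each group sum to (approximately) zero by construction.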
Problem

Research questions and friction points this paper is trying to address.

lead optimization
synthesizability
drug discovery
molecular modification
reaction templates
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-guided reinforcement learning
synthesis-constrained action space
reaction template
tool-augmented LLM
Group Relative Policy Optimization