MolAct: An Agentic RL Framework for Molecular Editing and Property Optimization

📅 2025-12-23
🤖 AI Summary
Molecular editing and optimization require iterative property improvement while preserving chemical validity and structural similarity. This work formalizes the task as an embodied agent reinforcement learning problem—the first such formulation—and introduces a two-stage RL training framework. Within this framework, a large language model (LLM) agent collaborates with domain-specific chemical tools—including validity checkers, LogP/solubility predictors, and Tanimoto similarity calculators—to close a “reasoning–tool invocation–optimization” loop. The approach enables interpretable, iterative exploration of chemical space. Our MolEditAgent-7B achieves 100%, 95%, and 98% chemical validity for atom addition, deletion, and substitution edits, respectively. MolOptAgent-7B outperforms Claude 3.7 in LogP optimization and demonstrates balanced performance across multi-objective tasks (e.g., aqueous solubility). These results establish a new paradigm for LLM-driven, tool-augmented molecular design.
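The "reasoning–tool invocation–optimization" loop described above can be sketched as a minimal agent cycle. Everything below is a toy stand-in (assumed function names, a fake validity check, and a fake LogP predictor), not the paper's actual chemistry tools or agent:

```python
# Toy sketch of the reasoning–tool–optimization loop. All tools here are
# illustrative stand-ins, NOT the paper's real validity checker or predictor.

def check_validity(smiles: str) -> bool:
    # Stand-in: a real system would parse the SMILES with a cheminformatics
    # toolkit; here we only check balanced parentheses.
    return smiles.count("(") == smiles.count(")")

def predict_logp(smiles: str) -> float:
    # Stand-in property predictor: rewards carbon-rich, penalizes oxygen-rich
    # strings. A real tool would use a trained LogP model.
    return smiles.count("C") * 0.5 - smiles.count("O") * 0.7

def propose_edit(smiles: str) -> str:
    # Stand-in for the LLM agent's proposed edit (here: always append a carbon).
    return smiles + "C"

def optimize(smiles: str, target_gain: float = 2.0, max_turns: int = 10) -> str:
    base = predict_logp(smiles)
    current = smiles
    for _ in range(max_turns):                 # multi-turn interaction
        candidate = propose_edit(current)      # "reasoning" step (stubbed)
        if not check_validity(candidate):      # tool: validity checker
            continue                           # tool feedback rejects the edit
        if predict_logp(candidate) > predict_logp(current):  # tool: property
            current = candidate                # keep only improving edits
        if predict_logp(current) - base >= target_gain:
            break                              # target improvement reached
    return current

print(optimize("CCO"))  # → CCOCCCC
```

The loop accepts an edit only when the validity tool passes it and the property tool reports an improvement, which is the feedback-driven refinement pattern the framework formalizes as RL.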

📝 Abstract
Molecular editing and optimization are multi-step problems that require iteratively improving properties while keeping molecules chemically valid and structurally similar. We frame both tasks as sequential, tool-guided decisions and introduce MolAct, an agentic reinforcement learning framework that employs a two-stage training paradigm: first building editing capability, then optimizing properties while reusing the learned editing behaviors. To the best of our knowledge, this is the first work to formalize molecular design as an agentic reinforcement learning problem, where an LLM agent learns to interleave reasoning, tool use, and molecular optimization. The framework enables agents to interact over multiple turns, invoking chemical tools for validity checking, property assessment, and similarity control, and to leverage tool feedback to refine subsequent edits. We instantiate the MolAct framework to train two model families: MolEditAgent for molecular editing and MolOptAgent for molecular optimization. In molecular editing, MolEditAgent-7B achieves 100%, 95%, and 98% validity on add, delete, and substitute edits, outperforming strong "thinking" baselines such as DeepSeek-R1; MolEditAgent-3B approaches the performance of much larger open "thinking" models like Qwen3-32B-think. In molecular optimization, MolOptAgent-7B (trained on MolEditAgent-7B) surpasses the best closed "thinking" baseline (e.g., Claude 3.7) on LogP, remains competitive on solubility, and maintains balanced performance across other objectives. These results highlight that treating molecular design as a multi-step, tool-augmented process is key to reliable and interpretable improvements.
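The similarity-control tool mentioned in the abstract is typically based on Tanimoto similarity, T(A, B) = |A ∩ B| / |A ∪ B| over fingerprint bit sets. The sketch below uses that standard formula; the character-bigram "fingerprint" is a toy stand-in for a real Morgan/ECFP fingerprint:

```python
# Tanimoto similarity on fingerprint bit sets: T(A, B) = |A ∩ B| / |A ∪ B|.
# The bigram fingerprint is a toy stand-in for a real molecular fingerprint.

def fingerprint(smiles: str) -> set:
    # Toy fingerprint: the set of character bigrams in the SMILES string.
    return {smiles[i:i + 2] for i in range(len(smiles) - 1)}

def tanimoto(a: str, b: str) -> float:
    fa, fb = fingerprint(a), fingerprint(b)
    if not fa and not fb:
        return 1.0  # two empty fingerprints are conventionally identical
    return len(fa & fb) / len(fa | fb)

print(tanimoto("CCO", "CCO"))  # identical molecules → 1.0
print(tanimoto("CCO", "CCN"))  # shared "CC" bigram only → ≈ 0.33
```

An optimizer can reject any edit whose similarity to the starting molecule drops below a threshold, which is how structural similarity is preserved during iterative editing.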
Problem

Research questions and friction points this paper is trying to address.

How to improve molecular properties over multiple steps while preserving chemical validity and structural similarity
How to enable LLM agents to interleave reasoning, tool invocation, and property refinement rather than editing in a single shot
How to train specialized agents for editing and optimization tasks without sacrificing chemical validity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage training paradigm: build editing capability first, then optimize properties while reusing the learned editing behaviors
LLM agent that interleaves reasoning, tool use, and molecular optimization
Multi-turn interaction with chemical tools (validity, property, similarity) for feedback-driven edits
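The two-stage paradigm above can be illustrated as reward shaping: stage 1 rewards only valid edits, and stage 2 reuses that constraint while adding property gain and a similarity-preservation term. The function names and weights below are assumptions for illustration, not the paper's actual reward design:

```python
# Illustrative reward shaping for the two-stage paradigm (weights and
# function names are assumptions, not the paper's actual rewards).

def stage1_reward(valid: bool) -> float:
    # Stage 1: learn to produce chemically valid edits.
    return 1.0 if valid else 0.0

def stage2_reward(valid: bool, property_gain: float, similarity: float,
                  w_prop: float = 1.0, w_sim: float = 0.5) -> float:
    # Stage 2: reuse the validity constraint, then trade off property
    # improvement against structural similarity to the starting molecule.
    if not valid:
        return 0.0  # invalid edits still score zero
    return w_prop * property_gain + w_sim * similarity
```

Keeping the validity gate from stage 1 inside the stage 2 reward is one way the optimization agent can inherit the editing agent's learned behaviors.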