Reasoning-Driven Retrosynthesis Prediction with Large Language Models via Reinforcement Learning

📅 2025-07-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing retrosynthetic prediction methods suffer from poor generalizability, limited interpretability, and insufficient explicit integration of chemical knowledge. To address these limitations, we propose RetroDFM-R, the first large language model to combine an explicit multi-step reasoning mechanism with chemical-rule-guided reinforcement learning. It employs a verifiable reward function to optimize the generation of synthetically viable pathways, enabling accurate and high-fidelity retrosynthetic planning. RetroDFM-R supports end-to-end multi-step pathway deconstruction and achieves a state-of-the-art 65.0% top-1 accuracy on USPTO-50K. Double-blind expert evaluation confirms the chemical plausibility of its predictions, and the model reproduces real-world multi-step syntheses of diverse pharmaceuticals and perovskite materials. This work advances both the theoretical foundations of the field, through principled incorporation of domain-specific chemical constraints, and its practical applicability, by delivering robust, interpretable, and experimentally validated synthetic routes.

📝 Abstract
Retrosynthesis planning, essential in organic synthesis and drug discovery, has greatly benefited from recent AI-driven advancements. Nevertheless, existing methods frequently face limitations in both applicability and explainability. Traditional graph-based and sequence-to-sequence models often lack generalized chemical knowledge, leading to predictions that are neither consistently accurate nor easily explainable. To address these challenges, we introduce RetroDFM-R, a reasoning-based large language model (LLM) designed specifically for chemical retrosynthesis. Leveraging large-scale reinforcement learning guided by chemically verifiable rewards, RetroDFM-R significantly enhances prediction accuracy and explainability. Comprehensive evaluations demonstrate that RetroDFM-R significantly outperforms state-of-the-art methods, achieving a top-1 accuracy of 65.0% on the USPTO-50K benchmark. Double-blind human assessments further validate the chemical plausibility and practical utility of RetroDFM-R's predictions. RetroDFM-R also accurately predicts multistep retrosynthetic routes reported in the literature for both real-world drug molecules and perovskite materials. Crucially, the model's explicit reasoning process provides human-interpretable insights, thereby enhancing trust and practical value in real-world retrosynthesis applications.
Problem

Research questions and friction points this paper is trying to address.

Improving accuracy in retrosynthesis prediction using LLMs
Enhancing explainability of AI-driven chemical synthesis plans
Generalizing chemical knowledge for broader applicability
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM with reinforcement learning for retrosynthesis
Chemically verifiable rewards enhance accuracy
Explicit reasoning provides interpretable insights
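The "chemically verifiable rewards" idea can be illustrated with a simplified sketch. This is not the paper's actual reward design: the hypothetical function below scores a predicted reactant set against a reference by normalizing and comparing dot-separated SMILES fragments, with partial credit for overlap. A real implementation would canonicalize each SMILES with a cheminformatics toolkit such as RDKit instead of plain string sorting.

```python
def verifiable_reward(predicted: str, reference: str) -> float:
    """Hypothetical, simplified reward for one retrosynthesis prediction.

    `predicted` and `reference` are dot-separated reactant SMILES strings,
    e.g. "CCO.CC(=O)O". Here fragments are only split, stripped, and
    sorted; a production system would compare canonical SMILES instead.
    """
    def normalize(smiles: str) -> list[str]:
        return sorted(frag.strip() for frag in smiles.split(".") if frag.strip())

    pred, ref = normalize(predicted), normalize(reference)
    if not pred:        # malformed or empty output earns no reward
        return 0.0
    if pred == ref:     # exact match with the recorded reactants
        return 1.0
    # Partial credit for overlapping fragments keeps the RL signal dense.
    overlap = len(set(pred) & set(ref))
    return 0.5 * overlap / max(len(ref), 1)
```

Because the reward is computed by a checker rather than a learned model, it cannot be gamed by fluent but chemically invalid text, which is the core appeal of verifiable rewards for this setting.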
Authors

Situo Zhang
Shanghai Jiao Tong University
Research interests: Large Language Models, Reinforcement Learning

Hanqi Li
X-LANCE Lab, School of Computer Science, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai 200240, China

Lu Chen
X-LANCE Lab, School of Computer Science, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai 200240, China

Zihan Zhao
Shanghai Jiao Tong University
Research interests: NLP

Xuanze Lin
School of Chemistry and Chemical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China

Zichen Zhu
Shanghai Jiao Tong University
Research interests: GUI agents, multimodal large models, human-computer interaction

Bo Chen
Suzhou Laboratory, Suzhou 215123, China

Xin Chen
Suzhou Laboratory, Suzhou 215123, China

Kai Yu
X-LANCE Lab, School of Computer Science, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai 200240, China