🤖 AI Summary
In computer-aided synthesis planning, existing methods struggle to generate accurate, experimentally executable procedures from reaction equations, which hinders practical laboratory implementation.
Method: This paper introduces a Chemistry-Guided Reasoning (CGR) framework for chain-of-thought data generation together with Reinforcement Learning from Verifiable Rewards (RLVR), enabling end-to-end, knowledge-traceable generation of experimental protocols. The approach combines chemistry-informed reasoning chains, supervised fine-tuning, RLVR optimization, and automated LLM-as-a-judge evaluation, and is trained on high-quality, structured data extracted from patents.
Contribution/Results: Evaluated on both NLP-based semantic coherence and chemistry-based feasibility, the method significantly outperforms general-purpose reasoning models and retrieval-based baselines. It generalizes across diverse reaction types and adaptively selects appropriate experimental conditions, effectively bridging the gap between computational design and wet-lab execution.
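The RLVR step relies on rewards that can be checked programmatically against reference protocols. As an illustrative sketch only (the paper's actual reward design, action vocabulary, and matching scheme are not specified here, so all names below are assumptions), a verifiable reward could score how much of a reference action sequence a generated protocol reproduces in order:

```python
def parse_actions(protocol: str) -> list[tuple[str, str]]:
    """Split a structured protocol like 'ADD ethanol; STIR 2 h' into
    (action, argument) pairs. The ';'-separated format is an assumption."""
    actions = []
    for step in protocol.split(";"):
        step = step.strip()
        if not step:
            continue
        name, _, arg = step.partition(" ")
        actions.append((name.upper(), arg.strip()))
    return actions

def verifiable_reward(generated: str, reference: str) -> float:
    """Reward in [0, 1]: fraction of reference steps reproduced in order,
    computed as a longest-common-subsequence overlap of (action, arg) pairs."""
    gen, ref = parse_actions(generated), parse_actions(reference)
    if not ref:
        return 0.0
    m, n = len(gen), len(ref)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if gen[i] == ref[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n] / n

# Three of the four reference steps are reproduced in order -> 0.75
reward = verifiable_reward(
    "ADD ethanol; STIR 2 h; FILTER solid",
    "ADD ethanol; STIR 2 h; CONCENTRATE; FILTER solid",
)
```

Because the reward is a deterministic function of the generated text, it needs no learned reward model, which is what makes it "verifiable" in the RLVR sense.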
📝 Abstract
Solving computer-aided synthesis planning is essential for enabling fully automated, robot-assisted synthesis workflows and improving the efficiency of drug discovery. A key challenge, however, is bridging the gap between computational route design and practical laboratory execution, particularly the accurate prediction of viable experimental procedures for each synthesis step. In this work, we present QFANG, a scientific reasoning language model capable of generating precise, structured experimental procedures directly from reaction equations, with explicit chain-of-thought reasoning. To develop QFANG, we curated a high-quality dataset comprising 905,990 chemical reactions paired with structured action sequences, extracted and processed from patent literature using large language models. We introduce a Chemistry-Guided Reasoning (CGR) framework that produces chain-of-thought data grounded in chemical knowledge at scale. The model subsequently undergoes supervised fine-tuning to elicit complex chemistry reasoning. Finally, we apply Reinforcement Learning from Verifiable Rewards (RLVR) to further enhance procedural accuracy. Experimental results demonstrate that QFANG outperforms advanced general-purpose reasoning models and nearest-neighbor retrieval baselines, as measured both by traditional NLP similarity metrics and by a chemically aware LLM-as-a-judge evaluator. Moreover, QFANG generalizes to certain out-of-domain reaction classes and adapts to variations in laboratory conditions and user-specific constraints. We believe that QFANG's ability to generate high-quality synthesis procedures represents an important step toward bridging the gap between computational synthesis planning and fully automated laboratory synthesis.