🤖 AI Summary
Contemporary large language models exhibit limited chemical knowledge, unreliable reasoning, and poor cross-task generalization in chemical discovery. To address these challenges, the authors propose a three-stage progressive training framework: (1) chemistry-specific foundation training to establish core domain knowledge; (2) distillation of expert-level structured reasoning protocols to encode systematic chemical problem solving; and (3) multi-task Group Relative Policy Optimization (GRPO) for balanced molecular- and reaction-level modeling. The paper presents this as the first framework to model chemists' systematic thinking end to end. The resulting model, Chem-R, achieves state-of-the-art performance across multiple authoritative chemical benchmarks, surpassing Gemini-2.5-Pro and DeepSeek-R1 by up to 46% on molecular tasks and up to 66% on reaction tasks, and outperforming existing chemistry-specialized models. The framework demonstrates strong generalization, robustness, and fidelity to chemical principles, enabling reliable, interpretable, and scalable AI-driven chemical discovery.
📝 Abstract
Although large language models (LLMs) have significant potential to advance chemical discovery, current LLMs lack core chemical knowledge, produce unreliable reasoning trajectories, and exhibit suboptimal performance across diverse chemical tasks. To address these challenges, we propose Chem-R, a generalizable Chemical Reasoning model designed to emulate the deliberative processes of chemists. Chem-R is trained through a three-phase framework that progressively builds advanced reasoning capabilities: (1) Chemical Foundation Training, which establishes core chemical knowledge; (2) Chemical Reasoning Protocol Distillation, which incorporates structured, expert-like reasoning traces to guide systematic and reliable problem solving; and (3) Multi-task Group Relative Policy Optimization, which optimizes the model for balanced performance across diverse molecular- and reaction-level tasks. This structured pipeline enables Chem-R to achieve state-of-the-art performance on comprehensive benchmarks, surpassing leading large language models, including Gemini-2.5-Pro and DeepSeek-R1, by up to 46% on molecular tasks and 66% on reaction tasks. Chem-R also consistently outperforms existing chemical foundation models across both molecular- and reaction-level tasks. These results highlight Chem-R's robust generalization, interpretability, and potential as a foundation for next-generation AI-driven chemical discovery.
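For readers unfamiliar with the third stage: Group Relative Policy Optimization (GRPO) replaces a learned value baseline with rewards normalized within a group of completions sampled for the same prompt. A minimal sketch of that group-relative advantage computation follows; the function name and reward values are illustrative assumptions, not taken from the paper, and the paper's multi-task variant additionally balances groups across task types.

```python
def group_relative_advantages(rewards):
    """Normalize each sampled completion's reward against its group.

    GRPO samples several completions per prompt and uses the group's
    mean (and standard deviation) of rewards as the baseline, instead
    of training a separate value model.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    if std == 0:
        # All completions scored identically: no learning signal.
        return [0.0] * n
    return [(r - mean) / std for r in rewards]

# Example: four completions for one prompt, scored by a task reward.
advs = group_relative_advantages([1.0, 0.0, 0.5, 1.0])
```

Completions scoring above the group mean get positive advantages (their tokens are reinforced); below-mean completions get negative ones.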