🤖 AI Summary
Current single-step retrosynthesis evaluations rely on a single ground-truth answer and Top-K accuracy, which fail to capture the openness and diversity inherent in real synthetic pathways. To address this limitation, this work proposes an evaluation framework tailored to large language models that shifts the focus from exact-match paradigms to chemical plausibility as the core assessment criterion. The framework introduces ChemCensor, an automated evaluation tool, and leverages the CREED dataset, comprising millions of ChemCensor-validated reaction records, to establish training and benchmarking protocols. Experiments show that a model trained under this framework achieves significantly higher chemical plausibility than existing LLM baselines, supporting the framework's effectiveness and practical utility for retrosynthetic prediction.
📝 Abstract
Recent progress has expanded the use of large language models (LLMs) in drug discovery, including synthesis planning. However, objective evaluation of retrosynthesis performance remains limited. Existing benchmarks and metrics typically rely on published synthetic procedures and Top-K accuracy against a single ground truth, which does not capture the open-ended nature of real-world synthesis planning. We propose a new benchmarking framework for single-step retrosynthesis that evaluates both general-purpose and chemistry-specialized LLMs using ChemCensor, a novel metric for chemical plausibility. By emphasizing plausibility over exact match, this approach better aligns with human synthesis planning practice. We also introduce CREED, a novel dataset comprising millions of ChemCensor-validated reaction records for LLM training, and use it to train a model that improves over the LLM baselines under this benchmark.
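To make the shift in evaluation criterion concrete, the sketch below contrasts single-ground-truth Top-K exact match with a plausibility-style score. It is a minimal illustration under stated assumptions: predictions are SMILES strings, the RDKit canonicalization calls are the only real APIs used, and `is_plausible` is a hypothetical placeholder for ChemCensor, whose actual interface the paper summary does not describe.

```python
# Minimal sketch of the two evaluation paradigms for single-step
# retrosynthesis. Only the RDKit calls are real APIs; `is_plausible`
# is a hypothetical stand-in for a ChemCensor-style plausibility check.
from rdkit import Chem  # pip install rdkit


def canonical(smiles):
    """Canonicalize a SMILES string; returns None if it fails to parse."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None


def top_k_exact_match(predictions, ground_truth, k=10):
    """Conventional metric: a hit only if the single recorded precursor
    set appears (after canonicalization) among the top-k predictions."""
    target = canonical(ground_truth)
    return target is not None and any(
        canonical(p) == target for p in predictions[:k]
    )


def top_k_plausibility(predictions, product, is_plausible, k=10):
    """Plausibility-style metric: fraction of top-k predictions judged
    chemically plausible, whether or not they match the recorded route.
    `is_plausible(reactants_smiles, product_smiles) -> bool` is a
    hypothetical oracle standing in for ChemCensor."""
    top = predictions[:k]
    if not top:
        return 0.0
    return sum(is_plausible(p, product) for p in top) / len(top)
```

The behavioral difference is the point the framework is built around: under exact match, a chemically valid alternative route scores zero because it differs from the one recorded procedure, whereas under a plausibility metric it can still count toward the score.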