🤖 AI Summary
Existing benchmarks struggle to evaluate large language models’ ability to reason about chemical toxicity based on biological mechanisms, often leading models to generate superficially plausible yet mechanistically incorrect explanations. This work introduces the first toxicity reasoning benchmark that integrates the Adverse Outcome Pathway (AOP) framework with experimental evidence from drug–target interaction studies, requiring models to infer organ-level adverse outcomes through a stepwise mechanistic chain beginning at molecular initiating events. By incorporating a reasoning-aware training approach, the study enables joint validation of both the model’s reasoning process and its final predictions, substantially improving the reliability of mechanistic reasoning and the accuracy of toxicity prediction. The findings demonstrate that strong predictive performance does not necessarily reflect correct mechanistic understanding.
📝 Abstract
Recent advances in large language models (LLMs) have enabled molecular reasoning for property prediction. However, toxicity arises from complex biological mechanisms beyond chemical structure, necessitating mechanistic reasoning for reliable prediction. Despite its importance, current benchmarks fail to systematically evaluate this capability. LLMs can generate fluent but biologically unfaithful explanations, making it difficult to assess whether predicted toxicities are grounded in valid mechanisms. To bridge this gap, we introduce ToxReason, a benchmark grounded in the Adverse Outcome Pathway (AOP) framework that evaluates organ-level toxicity reasoning across multiple organs. ToxReason integrates experimental drug–target interaction evidence with toxicity labels, requiring models to infer both toxic outcomes and their underlying mechanisms from Molecular Initiating Event (MIE) to Adverse Outcome (AO). Using ToxReason, we evaluate toxicity prediction performance and reasoning quality across diverse LLMs. We find that strong predictive performance does not necessarily imply reliable reasoning. Furthermore, we show that reasoning-aware training improves mechanistic reasoning and, consequently, toxicity prediction performance. Together, these results underscore the necessity of integrating reasoning into both evaluation and training for trustworthy toxicity modeling.