How Well Do LLMs Understand Drug Mechanisms? A Knowledge + Reasoning Evaluation Dataset

📅 2025-11-09
🤖 AI Summary
This study evaluates large language models' (LLMs) capabilities in understanding and reasoning about drug mechanisms, focusing on factual recall of established mechanisms and causal reasoning under counterfactual scenarios. To this end, we introduce an open-world evaluation dataset designed specifically for drug mechanism pathways and propose an assessment framework that jointly integrates knowledge recall and chain-of-thought reasoning. A key contribution is the application of fine-grained counterfactual perturbations to internal steps within mechanism pathways, which substantially increases reasoning complexity. The experimental design covers both open-world settings (requiring autonomous knowledge recall) and closed-world settings (where the factual premises are provided). Results show that o4-mini achieves the highest overall performance, while the much smaller Qwen3-4B-thinking attains comparable, and in some cases superior, results, indicating that the evaluation framework generalizes across diverse LLMs.

📝 Abstract
Two scientific fields showing increasing interest in pre-trained large language models (LLMs) are drug development / repurposing, and personalized medicine. For both, LLMs have to demonstrate factual knowledge as well as a deep understanding of drug mechanisms, so they can recall and reason about relevant knowledge in novel situations. Drug mechanisms of action are described as a series of interactions between biomedical entities, which interlink into one or more chains directed from the drug to the targeted disease. Composing the effects of the interactions in a candidate chain leads to an inference about whether the drug might be useful or not for that disease. We introduce a dataset that evaluates LLMs on both factual knowledge of known mechanisms, and their ability to reason about them under novel situations, presented as counterfactuals that the models are unlikely to have seen during training. Using this dataset, we show that o4-mini outperforms the 4o, o3, and o3-mini models from OpenAI, and the recent small Qwen3-4B-thinking model closely matches o4-mini's performance, even outperforming it in some cases. We demonstrate that the open world setting for reasoning tasks, which requires the model to recall relevant knowledge, is more challenging than the closed world setting where the needed factual knowledge is provided. We also show that counterfactuals affecting internal links in the reasoning chain present a much harder task than those affecting a link from the drug mentioned in the prompt.
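The chain-composition idea in the abstract can be made concrete with a small sketch. This is an illustrative toy only, not the paper's actual dataset format or scoring code: the entity names, relation vocabulary, and the sign-multiplication rule are all assumptions chosen to show how composing directed interactions along a mechanism chain yields an inference, and how flipping a single internal link (a counterfactual perturbation) flips that inference.

```python
# Toy model of a drug mechanism chain: a list of (source, relation, target)
# triples directed from the drug toward the disease. Relations carry a sign:
# +1 for effect-increasing links, -1 for effect-decreasing links.
# (Hypothetical vocabulary; the paper's actual relation set may differ.)
SIGN = {"activates": +1, "upregulates": +1, "causes": +1,
        "inhibits": -1, "downregulates": -1}

def net_effect(chain):
    """Compose the signs of all links in a mechanism chain.

    Returns +1 if the drug promotes the final entity in the chain,
    -1 if it suppresses it.
    """
    effect = 1
    for _source, relation, _target in chain:
        effect *= SIGN[relation]
    return effect

# Example: a drug that inhibits a protein which causes a disease
# suppresses the disease pathway -> candidate treatment.
chain = [("drugX", "inhibits", "proteinY"),
         ("proteinY", "causes", "diseaseZ")]
print(net_effect(chain))  # -1 (suppresses the disease)

# Counterfactually flipping the *internal* link changes the conclusion,
# which is the kind of perturbation the dataset applies.
counterfactual = [("drugX", "inhibits", "proteinY"),
                  ("proteinY", "inhibits", "diseaseZ")]
print(net_effect(counterfactual))  # +1 (now promotes the disease)
```

In the paper's closed-world setting the triples would be given in the prompt, while the open-world setting requires the model to recall the links itself before composing them.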
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' factual knowledge of drug mechanisms and interactions
Assessing LLMs' reasoning ability about drug mechanisms in novel situations
Testing LLM performance on counterfactual drug mechanism scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset evaluates drug mechanism knowledge and reasoning
Tests LLMs with counterfactuals for novel situations
Compares model performance in open versus closed world